00:00:00.000 Started by upstream project "autotest-per-patch" build number 132778 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.086 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.087 The recommended git tool is: git 00:00:00.087 using credential 00000000-0000-0000-0000-000000000002 00:00:00.089 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.146 Fetching changes from the remote Git repository 00:00:00.149 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.205 Using shallow fetch with depth 1 00:00:00.205 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.205 > git --version # timeout=10 00:00:00.253 > git --version # 'git version 2.39.2' 00:00:00.253 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.285 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.285 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:04.608 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:04.620 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:04.634 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:04.634 > git config core.sparsecheckout # timeout=10 00:00:04.647 > git read-tree -mu HEAD # timeout=10 00:00:04.664 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:04.693 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:04.694 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:04.805 [Pipeline] Start of Pipeline 00:00:04.818 [Pipeline] library 00:00:04.819 Loading library shm_lib@master 00:00:04.819 Library shm_lib@master is cached. Copying from home. 00:00:04.836 [Pipeline] node 00:00:04.845 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:04.846 [Pipeline] { 00:00:04.857 [Pipeline] catchError 00:00:04.858 [Pipeline] { 00:00:04.872 [Pipeline] wrap 00:00:04.881 [Pipeline] { 00:00:04.889 [Pipeline] stage 00:00:04.891 [Pipeline] { (Prologue) 00:00:05.092 [Pipeline] sh 00:00:05.377 + logger -p user.info -t JENKINS-CI 00:00:05.395 [Pipeline] echo 00:00:05.396 Node: GP6 00:00:05.403 [Pipeline] sh 00:00:05.697 [Pipeline] setCustomBuildProperty 00:00:05.707 [Pipeline] echo 00:00:05.709 Cleanup processes 00:00:05.714 [Pipeline] sh 00:00:05.996 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:05.996 2341863 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.008 [Pipeline] sh 00:00:06.290 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.290 ++ grep -v 'sudo pgrep' 00:00:06.290 ++ awk '{print $1}' 00:00:06.290 + sudo kill -9 00:00:06.290 + true 00:00:06.302 [Pipeline] cleanWs 00:00:06.310 [WS-CLEANUP] Deleting project workspace... 00:00:06.310 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.315 [WS-CLEANUP] done 00:00:06.319 [Pipeline] setCustomBuildProperty 00:00:06.328 [Pipeline] sh 00:00:06.609 + sudo git config --global --replace-all safe.directory '*' 00:00:06.707 [Pipeline] httpRequest 00:00:08.813 [Pipeline] echo 00:00:08.815 Sorcerer 10.211.164.101 is alive 00:00:08.825 [Pipeline] retry 00:00:08.827 [Pipeline] { 00:00:08.842 [Pipeline] httpRequest 00:00:08.847 HttpMethod: GET 00:00:08.848 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.848 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:08.872 Response Code: HTTP/1.1 200 OK 00:00:08.872 Success: Status code 200 is in the accepted range: 200,404 00:00:08.872 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.564 [Pipeline] } 00:00:17.578 [Pipeline] // retry 00:00:17.585 [Pipeline] sh 00:00:17.871 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:17.884 [Pipeline] httpRequest 00:00:19.415 [Pipeline] echo 00:00:19.416 Sorcerer 10.211.164.101 is alive 00:00:19.423 [Pipeline] retry 00:00:19.424 [Pipeline] { 00:00:19.435 [Pipeline] httpRequest 00:00:19.439 HttpMethod: GET 00:00:19.439 URL: http://10.211.164.101/packages/spdk_6c714c5fea2826b2c9c61c0c2d41ed61ac736dc0.tar.gz 00:00:19.440 Sending request to url: http://10.211.164.101/packages/spdk_6c714c5fea2826b2c9c61c0c2d41ed61ac736dc0.tar.gz 00:00:19.453 Response Code: HTTP/1.1 200 OK 00:00:19.453 Success: Status code 200 is in the accepted range: 200,404 00:00:19.453 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6c714c5fea2826b2c9c61c0c2d41ed61ac736dc0.tar.gz 00:04:20.452 [Pipeline] } 00:04:20.467 [Pipeline] // retry 00:04:20.473 [Pipeline] sh 00:04:20.764 + tar --no-same-owner -xf spdk_6c714c5fea2826b2c9c61c0c2d41ed61ac736dc0.tar.gz 00:04:23.322 [Pipeline] sh 00:04:23.606 + git -C spdk log 
--oneline -n5 00:04:23.606 6c714c5fe env: add mem_map_fini and vtophys_fini for cleanup 00:04:23.606 b7d7c4b24 env: handle possible DPDK errors in mem_map_init 00:04:23.606 b71c8b8dd env: explicitly set --legacy-mem flag in no hugepages mode 00:04:23.606 496bfd677 env: match legacy mem mode config with DPDK 00:04:23.606 a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails 00:04:23.618 [Pipeline] } 00:04:23.631 [Pipeline] // stage 00:04:23.640 [Pipeline] stage 00:04:23.642 [Pipeline] { (Prepare) 00:04:23.660 [Pipeline] writeFile 00:04:23.676 [Pipeline] sh 00:04:23.965 + logger -p user.info -t JENKINS-CI 00:04:23.981 [Pipeline] sh 00:04:24.268 + logger -p user.info -t JENKINS-CI 00:04:24.281 [Pipeline] sh 00:04:24.567 + cat autorun-spdk.conf 00:04:24.567 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:24.567 SPDK_TEST_NVMF=1 00:04:24.567 SPDK_TEST_NVME_CLI=1 00:04:24.567 SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:24.567 SPDK_TEST_NVMF_NICS=e810 00:04:24.567 SPDK_TEST_VFIOUSER=1 00:04:24.567 SPDK_RUN_UBSAN=1 00:04:24.567 NET_TYPE=phy 00:04:24.575 RUN_NIGHTLY=0 00:04:24.581 [Pipeline] readFile 00:04:24.615 [Pipeline] withEnv 00:04:24.618 [Pipeline] { 00:04:24.632 [Pipeline] sh 00:04:24.922 + set -ex 00:04:24.922 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:04:24.922 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:24.922 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:24.922 ++ SPDK_TEST_NVMF=1 00:04:24.922 ++ SPDK_TEST_NVME_CLI=1 00:04:24.922 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:24.922 ++ SPDK_TEST_NVMF_NICS=e810 00:04:24.922 ++ SPDK_TEST_VFIOUSER=1 00:04:24.922 ++ SPDK_RUN_UBSAN=1 00:04:24.922 ++ NET_TYPE=phy 00:04:24.922 ++ RUN_NIGHTLY=0 00:04:24.922 + case $SPDK_TEST_NVMF_NICS in 00:04:24.922 + DRIVERS=ice 00:04:24.922 + [[ tcp == \r\d\m\a ]] 00:04:24.922 + [[ -n ice ]] 00:04:24.922 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:04:24.922 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:04:24.922 rmmod: 
ERROR: Module mlx5_ib is not currently loaded 00:04:24.922 rmmod: ERROR: Module irdma is not currently loaded 00:04:24.922 rmmod: ERROR: Module i40iw is not currently loaded 00:04:24.922 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:04:24.922 + true 00:04:24.922 + for D in $DRIVERS 00:04:24.922 + sudo modprobe ice 00:04:24.922 + exit 0 00:04:24.932 [Pipeline] } 00:04:24.945 [Pipeline] // withEnv 00:04:24.949 [Pipeline] } 00:04:24.964 [Pipeline] // stage 00:04:24.972 [Pipeline] catchError 00:04:24.973 [Pipeline] { 00:04:24.987 [Pipeline] timeout 00:04:24.988 Timeout set to expire in 1 hr 0 min 00:04:24.990 [Pipeline] { 00:04:25.004 [Pipeline] stage 00:04:25.007 [Pipeline] { (Tests) 00:04:25.023 [Pipeline] sh 00:04:25.314 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:25.314 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:25.314 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:25.314 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:04:25.314 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:25.314 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:25.314 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:04:25.314 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:25.314 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:04:25.314 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:04:25.314 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:04:25.314 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:04:25.314 + source /etc/os-release 00:04:25.314 ++ NAME='Fedora Linux' 00:04:25.314 ++ VERSION='39 (Cloud Edition)' 00:04:25.314 ++ ID=fedora 00:04:25.314 ++ VERSION_ID=39 00:04:25.314 ++ VERSION_CODENAME= 00:04:25.314 ++ PLATFORM_ID=platform:f39 00:04:25.314 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:04:25.314 ++ ANSI_COLOR='0;38;2;60;110;180' 00:04:25.314 ++ LOGO=fedora-logo-icon 00:04:25.314 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:04:25.314 ++ HOME_URL=https://fedoraproject.org/ 00:04:25.314 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:04:25.314 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:04:25.314 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:04:25.314 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:04:25.314 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:04:25.314 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:04:25.314 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:04:25.314 ++ SUPPORT_END=2024-11-12 00:04:25.314 ++ VARIANT='Cloud Edition' 00:04:25.314 ++ VARIANT_ID=cloud 00:04:25.314 + uname -a 00:04:25.314 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:04:25.314 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:26.252 Hugepages 00:04:26.252 node hugesize free / total 00:04:26.252 node0 1048576kB 0 / 0 00:04:26.252 node0 2048kB 0 / 0 00:04:26.252 node1 1048576kB 0 / 0 00:04:26.252 node1 2048kB 0 / 0 00:04:26.252 00:04:26.252 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:26.252 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:26.252 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 
00:04:26.252 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:26.252 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:26.252 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:26.252 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:26.252 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:26.253 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:26.253 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:26.253 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:26.253 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:26.253 + rm -f /tmp/spdk-ld-path 00:04:26.253 + source autorun-spdk.conf 00:04:26.253 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.253 ++ SPDK_TEST_NVMF=1 00:04:26.253 ++ SPDK_TEST_NVME_CLI=1 00:04:26.253 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:26.253 ++ SPDK_TEST_NVMF_NICS=e810 00:04:26.253 ++ SPDK_TEST_VFIOUSER=1 00:04:26.253 ++ SPDK_RUN_UBSAN=1 00:04:26.253 ++ NET_TYPE=phy 00:04:26.253 ++ RUN_NIGHTLY=0 00:04:26.253 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:04:26.253 + [[ -n '' ]] 00:04:26.253 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.511 + for M in /var/spdk/build-*-manifest.txt 00:04:26.511 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:04:26.511 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:26.511 + for M in /var/spdk/build-*-manifest.txt 00:04:26.511 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:04:26.511 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:26.511 + for M in /var/spdk/build-*-manifest.txt 00:04:26.511 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:04:26.511 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:04:26.511 ++ uname 00:04:26.511 + [[ Linux == \L\i\n\u\x ]] 00:04:26.511 + sudo dmesg -T 00:04:26.511 + sudo dmesg --clear 00:04:26.511 + dmesg_pid=2343184 00:04:26.511 + [[ Fedora Linux == FreeBSD ]] 00:04:26.511 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:26.511 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:04:26.511 + sudo dmesg -Tw 00:04:26.511 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:04:26.511 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:04:26.511 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:04:26.511 + [[ -x /usr/src/fio-static/fio ]] 00:04:26.511 + export FIO_BIN=/usr/src/fio-static/fio 00:04:26.511 + FIO_BIN=/usr/src/fio-static/fio 00:04:26.511 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:04:26.511 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:04:26.511 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:04:26.511 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:26.511 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:04:26.511 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:04:26.511 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:26.511 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:04:26.511 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.511 10:14:58 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:04:26.511 10:14:58 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:04:26.511 10:14:58 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:04:26.511 10:14:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:04:26.511 10:14:58 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:26.511 10:14:58 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:04:26.511 10:14:58 -- common/autobuild_common.sh@15 -- $ source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:26.511 10:14:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:04:26.511 10:14:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:04:26.511 10:14:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:26.511 10:14:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:26.511 10:14:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.511 10:14:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.511 10:14:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.511 10:14:58 -- paths/export.sh@5 -- $ export PATH 00:04:26.511 10:14:58 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:26.511 10:14:58 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:26.511 10:14:58 -- common/autobuild_common.sh@493 -- $ date +%s 00:04:26.511 10:14:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733735698.XXXXXX 00:04:26.511 10:14:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733735698.OCPT9F 00:04:26.511 10:14:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:04:26.511 10:14:58 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:04:26.511 10:14:58 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:04:26.511 10:14:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:04:26.511 10:14:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:04:26.511 10:14:58 -- common/autobuild_common.sh@509 -- $ get_config_params 00:04:26.511 10:14:58 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:04:26.511 10:14:58 -- common/autotest_common.sh@10 -- $ set +x 00:04:26.511 10:14:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio 
--with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:04:26.511 10:14:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:04:26.511 10:14:58 -- pm/common@17 -- $ local monitor 00:04:26.511 10:14:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.511 10:14:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.511 10:14:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.511 10:14:58 -- pm/common@21 -- $ date +%s 00:04:26.511 10:14:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.511 10:14:58 -- pm/common@21 -- $ date +%s 00:04:26.511 10:14:58 -- pm/common@25 -- $ sleep 1 00:04:26.511 10:14:58 -- pm/common@21 -- $ date +%s 00:04:26.511 10:14:58 -- pm/common@21 -- $ date +%s 00:04:26.511 10:14:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735698 00:04:26.511 10:14:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735698 00:04:26.511 10:14:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735698 00:04:26.512 10:14:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733735698 00:04:26.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735698_collect-vmstat.pm.log 00:04:26.512 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735698_collect-cpu-load.pm.log 00:04:26.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735698_collect-cpu-temp.pm.log 00:04:26.512 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733735698_collect-bmc-pm.bmc.pm.log 00:04:27.461 10:14:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:04:27.461 10:14:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:04:27.461 10:14:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:04:27.461 10:14:59 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:27.461 10:14:59 -- spdk/autobuild.sh@16 -- $ date -u 00:04:27.461 Mon Dec 9 09:14:59 AM UTC 2024 00:04:27.461 10:14:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:04:27.461 v25.01-pre-315-g6c714c5fe 00:04:27.461 10:14:59 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:04:27.461 10:14:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:04:27.461 10:14:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:04:27.461 10:14:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:27.461 10:14:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:27.461 10:14:59 -- common/autotest_common.sh@10 -- $ set +x 00:04:27.719 ************************************ 00:04:27.719 START TEST ubsan 00:04:27.719 ************************************ 00:04:27.719 10:14:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:04:27.719 using ubsan 00:04:27.719 00:04:27.719 real 0m0.000s 00:04:27.719 user 0m0.000s 00:04:27.719 sys 0m0.000s 00:04:27.719 10:14:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:27.719 10:14:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:04:27.719 ************************************ 00:04:27.719 END TEST ubsan 00:04:27.719 
************************************ 00:04:27.719 10:14:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:04:27.719 10:14:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:04:27.719 10:14:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:04:27.719 10:14:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:04:27.719 10:14:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:04:27.719 10:14:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:04:27.719 10:14:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:04:27.719 10:14:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:04:27.719 10:14:59 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:04:27.719 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:04:27.719 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:27.976 Using 'verbs' RDMA provider 00:04:38.524 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:48.515 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:48.515 Creating mk/config.mk...done. 00:04:48.515 Creating mk/cc.flags.mk...done. 00:04:48.515 Type 'make' to build. 
00:04:48.515 10:15:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:48.515 10:15:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:48.515 10:15:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:48.515 10:15:20 -- common/autotest_common.sh@10 -- $ set +x 00:04:48.515 ************************************ 00:04:48.515 START TEST make 00:04:48.515 ************************************ 00:04:48.515 10:15:20 make -- common/autotest_common.sh@1129 -- $ make -j48 00:04:48.772 make[1]: Nothing to be done for 'all'. 00:04:50.741 The Meson build system 00:04:50.741 Version: 1.5.0 00:04:50.741 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:50.741 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:50.741 Build type: native build 00:04:50.741 Project name: libvfio-user 00:04:50.741 Project version: 0.0.1 00:04:50.741 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:50.741 C linker for the host machine: cc ld.bfd 2.40-14 00:04:50.741 Host machine cpu family: x86_64 00:04:50.741 Host machine cpu: x86_64 00:04:50.741 Run-time dependency threads found: YES 00:04:50.741 Library dl found: YES 00:04:50.741 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:50.741 Run-time dependency json-c found: YES 0.17 00:04:50.741 Run-time dependency cmocka found: YES 1.1.7 00:04:50.741 Program pytest-3 found: NO 00:04:50.741 Program flake8 found: NO 00:04:50.741 Program misspell-fixer found: NO 00:04:50.741 Program restructuredtext-lint found: NO 00:04:50.741 Program valgrind found: YES (/usr/bin/valgrind) 00:04:50.741 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:50.741 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:50.741 Compiler for C supports arguments -Wwrite-strings: YES 00:04:50.741 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses 
feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:50.741 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:50.741 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:50.741 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:50.741 Build targets in project: 8 00:04:50.741 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:50.741 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:50.741 00:04:50.741 libvfio-user 0.0.1 00:04:50.741 00:04:50.741 User defined options 00:04:50.741 buildtype : debug 00:04:50.741 default_library: shared 00:04:50.741 libdir : /usr/local/lib 00:04:50.741 00:04:50.741 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:51.691 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:51.691 [1/37] Compiling C object samples/null.p/null.c.o 00:04:51.691 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:51.691 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:51.691 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:51.691 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:51.691 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:51.691 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:51.691 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:51.951 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:51.951 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:51.951 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 
00:04:51.951 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:51.951 [13/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:51.951 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:51.951 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:51.951 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:51.951 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:51.951 [18/37] Compiling C object samples/client.p/client.c.o 00:04:51.951 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:51.951 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:51.951 [21/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:51.951 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:51.951 [23/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:51.951 [24/37] Compiling C object samples/server.p/server.c.o 00:04:51.951 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:51.951 [26/37] Linking target samples/client 00:04:51.951 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:51.951 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:52.209 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:52.209 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:52.209 [31/37] Linking target test/unit_tests 00:04:52.209 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:52.472 [33/37] Linking target samples/server 00:04:52.472 [34/37] Linking target samples/shadow_ioeventfd_server 00:04:52.472 [35/37] Linking target samples/lspci 00:04:52.472 [36/37] Linking target samples/null 00:04:52.472 [37/37] Linking target samples/gpio-pci-idio-16 00:04:52.472 INFO: autodetecting backend as ninja 00:04:52.472 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:52.472 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:53.416 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:53.416 ninja: no work to do. 00:04:58.679 The Meson build system 00:04:58.679 Version: 1.5.0 00:04:58.679 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:58.679 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:58.679 Build type: native build 00:04:58.679 Program cat found: YES (/usr/bin/cat) 00:04:58.679 Project name: DPDK 00:04:58.679 Project version: 24.03.0 00:04:58.679 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:58.679 C linker for the host machine: cc ld.bfd 2.40-14 00:04:58.679 Host machine cpu family: x86_64 00:04:58.679 Host machine cpu: x86_64 00:04:58.680 Message: ## Building in Developer Mode ## 00:04:58.680 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:58.680 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:58.680 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:58.680 Program python3 found: YES (/usr/bin/python3) 00:04:58.680 Program cat found: YES (/usr/bin/cat) 00:04:58.680 Compiler for C supports arguments -march=native: YES 00:04:58.680 Checking for size of "void *" : 8 00:04:58.680 Checking for size of "void *" : 8 (cached) 00:04:58.680 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:58.680 Library m found: YES 00:04:58.680 Library numa found: YES 00:04:58.680 Has header "numaif.h" : YES 00:04:58.680 Library fdt found: NO 
00:04:58.680 Library execinfo found: NO
00:04:58.680 Has header "execinfo.h" : YES
00:04:58.680 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:04:58.680 Run-time dependency libarchive found: NO (tried pkgconfig)
00:04:58.680 Run-time dependency libbsd found: NO (tried pkgconfig)
00:04:58.680 Run-time dependency jansson found: NO (tried pkgconfig)
00:04:58.680 Run-time dependency openssl found: YES 3.1.1
00:04:58.680 Run-time dependency libpcap found: YES 1.10.4
00:04:58.680 Has header "pcap.h" with dependency libpcap: YES
00:04:58.680 Compiler for C supports arguments -Wcast-qual: YES
00:04:58.680 Compiler for C supports arguments -Wdeprecated: YES
00:04:58.680 Compiler for C supports arguments -Wformat: YES
00:04:58.680 Compiler for C supports arguments -Wformat-nonliteral: NO
00:04:58.680 Compiler for C supports arguments -Wformat-security: NO
00:04:58.680 Compiler for C supports arguments -Wmissing-declarations: YES
00:04:58.680 Compiler for C supports arguments -Wmissing-prototypes: YES
00:04:58.680 Compiler for C supports arguments -Wnested-externs: YES
00:04:58.680 Compiler for C supports arguments -Wold-style-definition: YES
00:04:58.680 Compiler for C supports arguments -Wpointer-arith: YES
00:04:58.680 Compiler for C supports arguments -Wsign-compare: YES
00:04:58.680 Compiler for C supports arguments -Wstrict-prototypes: YES
00:04:58.680 Compiler for C supports arguments -Wundef: YES
00:04:58.680 Compiler for C supports arguments -Wwrite-strings: YES
00:04:58.680 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:04:58.680 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:04:58.680 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:04:58.680 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:04:58.680 Program objdump found: YES (/usr/bin/objdump)
00:04:58.680 Compiler for C supports arguments -mavx512f: YES
00:04:58.680 Checking if "AVX512 checking" compiles: YES
00:04:58.680 Fetching value of define "__SSE4_2__" : 1
00:04:58.680 Fetching value of define "__AES__" : 1
00:04:58.680 Fetching value of define "__AVX__" : 1
00:04:58.680 Fetching value of define "__AVX2__" : (undefined)
00:04:58.680 Fetching value of define "__AVX512BW__" : (undefined)
00:04:58.680 Fetching value of define "__AVX512CD__" : (undefined)
00:04:58.680 Fetching value of define "__AVX512DQ__" : (undefined)
00:04:58.680 Fetching value of define "__AVX512F__" : (undefined)
00:04:58.680 Fetching value of define "__AVX512VL__" : (undefined)
00:04:58.680 Fetching value of define "__PCLMUL__" : 1
00:04:58.680 Fetching value of define "__RDRND__" : 1
00:04:58.680 Fetching value of define "__RDSEED__" : (undefined)
00:04:58.680 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:04:58.680 Fetching value of define "__znver1__" : (undefined)
00:04:58.680 Fetching value of define "__znver2__" : (undefined)
00:04:58.680 Fetching value of define "__znver3__" : (undefined)
00:04:58.680 Fetching value of define "__znver4__" : (undefined)
00:04:58.680 Compiler for C supports arguments -Wno-format-truncation: YES
00:04:58.680 Message: lib/log: Defining dependency "log"
00:04:58.680 Message: lib/kvargs: Defining dependency "kvargs"
00:04:58.680 Message: lib/telemetry: Defining dependency "telemetry"
00:04:58.680 Checking for function "getentropy" : NO
00:04:58.680 Message: lib/eal: Defining dependency "eal"
00:04:58.680 Message: lib/ring: Defining dependency "ring"
00:04:58.680 Message: lib/rcu: Defining dependency "rcu"
00:04:58.680 Message: lib/mempool: Defining dependency "mempool"
00:04:58.680 Message: lib/mbuf: Defining dependency "mbuf"
00:04:58.680 Fetching value of define "__PCLMUL__" : 1 (cached)
00:04:58.680 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:04:58.680 Compiler for C supports arguments -mpclmul: YES
00:04:58.680 Compiler for C supports arguments -maes: YES
00:04:58.680 Compiler for C supports arguments -mavx512f: YES (cached)
00:04:58.680 Compiler for C supports arguments -mavx512bw: YES
00:04:58.680 Compiler for C supports arguments -mavx512dq: YES
00:04:58.680 Compiler for C supports arguments -mavx512vl: YES
00:04:58.680 Compiler for C supports arguments -mvpclmulqdq: YES
00:04:58.680 Compiler for C supports arguments -mavx2: YES
00:04:58.680 Compiler for C supports arguments -mavx: YES
00:04:58.680 Message: lib/net: Defining dependency "net"
00:04:58.680 Message: lib/meter: Defining dependency "meter"
00:04:58.680 Message: lib/ethdev: Defining dependency "ethdev"
00:04:58.680 Message: lib/pci: Defining dependency "pci"
00:04:58.680 Message: lib/cmdline: Defining dependency "cmdline"
00:04:58.680 Message: lib/hash: Defining dependency "hash"
00:04:58.680 Message: lib/timer: Defining dependency "timer"
00:04:58.680 Message: lib/compressdev: Defining dependency "compressdev"
00:04:58.680 Message: lib/cryptodev: Defining dependency "cryptodev"
00:04:58.680 Message: lib/dmadev: Defining dependency "dmadev"
00:04:58.680 Compiler for C supports arguments -Wno-cast-qual: YES
00:04:58.680 Message: lib/power: Defining dependency "power"
00:04:58.680 Message: lib/reorder: Defining dependency "reorder"
00:04:58.680 Message: lib/security: Defining dependency "security"
00:04:58.680 Has header "linux/userfaultfd.h" : YES
00:04:58.680 Has header "linux/vduse.h" : YES
00:04:58.680 Message: lib/vhost: Defining dependency "vhost"
00:04:58.680 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:04:58.680 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:04:58.680 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:04:58.680 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:04:58.680 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:04:58.680 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:04:58.680 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:04:58.680 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:04:58.680 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:04:58.680 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:04:58.680 Program doxygen found: YES (/usr/local/bin/doxygen)
00:04:58.680 Configuring doxy-api-html.conf using configuration
00:04:58.680 Configuring doxy-api-man.conf using configuration
00:04:58.680 Program mandb found: YES (/usr/bin/mandb)
00:04:58.680 Program sphinx-build found: NO
00:04:58.680 Configuring rte_build_config.h using configuration
00:04:58.680 Message:
00:04:58.680 =================
00:04:58.680 Applications Enabled
00:04:58.680 =================
00:04:58.680
00:04:58.680 apps:
00:04:58.680
00:04:58.680
00:04:58.680 Message:
00:04:58.680 =================
00:04:58.680 Libraries Enabled
00:04:58.680 =================
00:04:58.680
00:04:58.680 libs:
00:04:58.680 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:04:58.680 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:04:58.680 cryptodev, dmadev, power, reorder, security, vhost,
00:04:58.680
00:04:58.680 Message:
00:04:58.680 ===============
00:04:58.680 Drivers Enabled
00:04:58.680 ===============
00:04:58.680
00:04:58.680 common:
00:04:58.680
00:04:58.680 bus:
00:04:58.680 pci, vdev,
00:04:58.680 mempool:
00:04:58.680 ring,
00:04:58.680 dma:
00:04:58.680
00:04:58.680 net:
00:04:58.680
00:04:58.680 crypto:
00:04:58.680
00:04:58.680 compress:
00:04:58.680
00:04:58.680 vdpa:
00:04:58.680
00:04:58.680
00:04:58.680 Message:
00:04:58.680 =================
00:04:58.680 Content Skipped
00:04:58.680 =================
00:04:58.681
00:04:58.681 apps:
00:04:58.681 dumpcap: explicitly disabled via build config
00:04:58.681 graph: explicitly disabled via build config
00:04:58.681 pdump: explicitly disabled via build config
00:04:58.681 proc-info: explicitly disabled via build config
00:04:58.681 test-acl: explicitly disabled via build config
00:04:58.681 test-bbdev: explicitly disabled via build config
00:04:58.681 test-cmdline: explicitly disabled via build config
00:04:58.681 test-compress-perf: explicitly disabled via build config
00:04:58.681 test-crypto-perf: explicitly disabled via build config
00:04:58.681 test-dma-perf: explicitly disabled via build config
00:04:58.681 test-eventdev: explicitly disabled via build config
00:04:58.681 test-fib: explicitly disabled via build config
00:04:58.681 test-flow-perf: explicitly disabled via build config
00:04:58.681 test-gpudev: explicitly disabled via build config
00:04:58.681 test-mldev: explicitly disabled via build config
00:04:58.681 test-pipeline: explicitly disabled via build config
00:04:58.681 test-pmd: explicitly disabled via build config
00:04:58.681 test-regex: explicitly disabled via build config
00:04:58.681 test-sad: explicitly disabled via build config
00:04:58.681 test-security-perf: explicitly disabled via build config
00:04:58.681
00:04:58.681 libs:
00:04:58.681 argparse: explicitly disabled via build config
00:04:58.681 metrics: explicitly disabled via build config
00:04:58.681 acl: explicitly disabled via build config
00:04:58.681 bbdev: explicitly disabled via build config
00:04:58.681 bitratestats: explicitly disabled via build config
00:04:58.681 bpf: explicitly disabled via build config
00:04:58.681 cfgfile: explicitly disabled via build config
00:04:58.681 distributor: explicitly disabled via build config
00:04:58.681 efd: explicitly disabled via build config
00:04:58.681 eventdev: explicitly disabled via build config
00:04:58.681 dispatcher: explicitly disabled via build config
00:04:58.681 gpudev: explicitly disabled via build config
00:04:58.681 gro: explicitly disabled via build config
00:04:58.681 gso: explicitly disabled via build config
00:04:58.681 ip_frag: explicitly disabled via build config
00:04:58.681 jobstats: explicitly disabled via build config
00:04:58.681 latencystats: explicitly disabled via build config
00:04:58.681 lpm: explicitly disabled via build config
00:04:58.681 member: explicitly disabled via build config
00:04:58.681 pcapng: explicitly disabled via build config
00:04:58.681 rawdev: explicitly disabled via build config
00:04:58.681 regexdev: explicitly disabled via build config
00:04:58.681 mldev: explicitly disabled via build config
00:04:58.681 rib: explicitly disabled via build config
00:04:58.681 sched: explicitly disabled via build config
00:04:58.681 stack: explicitly disabled via build config
00:04:58.681 ipsec: explicitly disabled via build config
00:04:58.681 pdcp: explicitly disabled via build config
00:04:58.681 fib: explicitly disabled via build config
00:04:58.681 port: explicitly disabled via build config
00:04:58.681 pdump: explicitly disabled via build config
00:04:58.681 table: explicitly disabled via build config
00:04:58.681 pipeline: explicitly disabled via build config
00:04:58.681 graph: explicitly disabled via build config
00:04:58.681 node: explicitly disabled via build config
00:04:58.681
00:04:58.681 drivers:
00:04:58.681 common/cpt: not in enabled drivers build config
00:04:58.681 common/dpaax: not in enabled drivers build config
00:04:58.681 common/iavf: not in enabled drivers build config
00:04:58.681 common/idpf: not in enabled drivers build config
00:04:58.681 common/ionic: not in enabled drivers build config
00:04:58.681 common/mvep: not in enabled drivers build config
00:04:58.681 common/octeontx: not in enabled drivers build config
00:04:58.681 bus/auxiliary: not in enabled drivers build config
00:04:58.681 bus/cdx: not in enabled drivers build config
00:04:58.681 bus/dpaa: not in enabled drivers build config
00:04:58.681 bus/fslmc: not in enabled drivers build config
00:04:58.681 bus/ifpga: not in enabled drivers build config
00:04:58.681 bus/platform: not in enabled drivers build config
00:04:58.681 bus/uacce: not in enabled drivers build config
00:04:58.681 bus/vmbus: not in enabled drivers build config
00:04:58.681 common/cnxk: not in enabled drivers build config
00:04:58.681 common/mlx5: not in enabled drivers build config
00:04:58.681 common/nfp: not in enabled drivers build config
00:04:58.681 common/nitrox: not in enabled drivers build config
00:04:58.681 common/qat: not in enabled drivers build config
00:04:58.681 common/sfc_efx: not in enabled drivers build config
00:04:58.681 mempool/bucket: not in enabled drivers build config
00:04:58.681 mempool/cnxk: not in enabled drivers build config
00:04:58.681 mempool/dpaa: not in enabled drivers build config
00:04:58.681 mempool/dpaa2: not in enabled drivers build config
00:04:58.681 mempool/octeontx: not in enabled drivers build config
00:04:58.681 mempool/stack: not in enabled drivers build config
00:04:58.681 dma/cnxk: not in enabled drivers build config
00:04:58.681 dma/dpaa: not in enabled drivers build config
00:04:58.681 dma/dpaa2: not in enabled drivers build config
00:04:58.681 dma/hisilicon: not in enabled drivers build config
00:04:58.681 dma/idxd: not in enabled drivers build config
00:04:58.681 dma/ioat: not in enabled drivers build config
00:04:58.681 dma/skeleton: not in enabled drivers build config
00:04:58.681 net/af_packet: not in enabled drivers build config
00:04:58.681 net/af_xdp: not in enabled drivers build config
00:04:58.681 net/ark: not in enabled drivers build config
00:04:58.681 net/atlantic: not in enabled drivers build config
00:04:58.681 net/avp: not in enabled drivers build config
00:04:58.681 net/axgbe: not in enabled drivers build config
00:04:58.681 net/bnx2x: not in enabled drivers build config
00:04:58.681 net/bnxt: not in enabled drivers build config
00:04:58.681 net/bonding: not in enabled drivers build config
00:04:58.681 net/cnxk: not in enabled drivers build config
00:04:58.681 net/cpfl: not in enabled drivers build config
00:04:58.681 net/cxgbe: not in enabled drivers build config
00:04:58.681 net/dpaa: not in enabled drivers build config
00:04:58.681 net/dpaa2: not in enabled drivers build config
00:04:58.681 net/e1000: not in enabled drivers build config
00:04:58.681 net/ena: not in enabled drivers build config
00:04:58.681 net/enetc: not in enabled drivers build config
00:04:58.681 net/enetfec: not in enabled drivers build config
00:04:58.681 net/enic: not in enabled drivers build config
00:04:58.681 net/failsafe: not in enabled drivers build config
00:04:58.681 net/fm10k: not in enabled drivers build config
00:04:58.681 net/gve: not in enabled drivers build config
00:04:58.681 net/hinic: not in enabled drivers build config
00:04:58.681 net/hns3: not in enabled drivers build config
00:04:58.681 net/i40e: not in enabled drivers build config
00:04:58.681 net/iavf: not in enabled drivers build config
00:04:58.681 net/ice: not in enabled drivers build config
00:04:58.681 net/idpf: not in enabled drivers build config
00:04:58.681 net/igc: not in enabled drivers build config
00:04:58.681 net/ionic: not in enabled drivers build config
00:04:58.681 net/ipn3ke: not in enabled drivers build config
00:04:58.681 net/ixgbe: not in enabled drivers build config
00:04:58.681 net/mana: not in enabled drivers build config
00:04:58.681 net/memif: not in enabled drivers build config
00:04:58.681 net/mlx4: not in enabled drivers build config
00:04:58.681 net/mlx5: not in enabled drivers build config
00:04:58.681 net/mvneta: not in enabled drivers build config
00:04:58.681 net/mvpp2: not in enabled drivers build config
00:04:58.681 net/netvsc: not in enabled drivers build config
00:04:58.681 net/nfb: not in enabled drivers build config
00:04:58.681 net/nfp: not in enabled drivers build config
00:04:58.681 net/ngbe: not in enabled drivers build config
00:04:58.681 net/null: not in enabled drivers build config
00:04:58.681 net/octeontx: not in enabled drivers build config
00:04:58.681 net/octeon_ep: not in enabled drivers build config
00:04:58.681 net/pcap: not in enabled drivers build config
00:04:58.681 net/pfe: not in enabled drivers build config
00:04:58.681 net/qede: not in enabled drivers build config
00:04:58.681 net/ring: not in enabled drivers build config
00:04:58.681 net/sfc: not in enabled drivers build config
00:04:58.681 net/softnic: not in enabled drivers build config
00:04:58.681 net/tap: not in enabled drivers build config
00:04:58.681 net/thunderx: not in enabled drivers build config
00:04:58.681 net/txgbe: not in enabled drivers build config
00:04:58.681 net/vdev_netvsc: not in enabled drivers build config
00:04:58.681 net/vhost: not in enabled drivers build config
00:04:58.681 net/virtio: not in enabled drivers build config
00:04:58.681 net/vmxnet3: not in enabled drivers build config
00:04:58.681 raw/*: missing internal dependency, "rawdev"
00:04:58.681 crypto/armv8: not in enabled drivers build config
00:04:58.681 crypto/bcmfs: not in enabled drivers build config
00:04:58.681 crypto/caam_jr: not in enabled drivers build config
00:04:58.681 crypto/ccp: not in enabled drivers build config
00:04:58.681 crypto/cnxk: not in enabled drivers build config
00:04:58.681 crypto/dpaa_sec: not in enabled drivers build config
00:04:58.681 crypto/dpaa2_sec: not in enabled drivers build config
00:04:58.682 crypto/ipsec_mb: not in enabled drivers build config
00:04:58.682 crypto/mlx5: not in enabled drivers build config
00:04:58.682 crypto/mvsam: not in enabled drivers build config
00:04:58.682 crypto/nitrox: not in enabled drivers build config
00:04:58.682 crypto/null: not in enabled drivers build config
00:04:58.682 crypto/octeontx: not in enabled drivers build config
00:04:58.682 crypto/openssl: not in enabled drivers build config
00:04:58.682 crypto/scheduler: not in enabled drivers build config
00:04:58.682 crypto/uadk: not in enabled drivers build config
00:04:58.682 crypto/virtio: not in enabled drivers build config
00:04:58.682 compress/isal: not in enabled drivers build config
00:04:58.682 compress/mlx5: not in enabled drivers build config
00:04:58.682 compress/nitrox: not in enabled drivers build config
00:04:58.682 compress/octeontx: not in enabled drivers build config
00:04:58.682 compress/zlib: not in enabled drivers build config
00:04:58.682 regex/*: missing internal dependency, "regexdev"
00:04:58.682 ml/*: missing internal dependency, "mldev"
00:04:58.682 vdpa/ifc: not in enabled drivers build config
00:04:58.682 vdpa/mlx5: not in enabled drivers build config
00:04:58.682 vdpa/nfp: not in enabled drivers build config
00:04:58.682 vdpa/sfc: not in enabled drivers build config
00:04:58.682 event/*: missing internal dependency, "eventdev"
00:04:58.682 baseband/*: missing internal dependency, "bbdev"
00:04:58.682 gpu/*: missing internal dependency, "gpudev"
00:04:58.682
00:04:58.682
00:04:58.682 Build targets in project: 85
00:04:58.682
00:04:58.682 DPDK 24.03.0
00:04:58.682
00:04:58.682 User defined options
00:04:58.682 buildtype : debug
00:04:58.682 default_library : shared
00:04:58.682 libdir : lib
00:04:58.682 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:04:58.682 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:04:58.682 c_link_args :
00:04:58.682 cpu_instruction_set: native
00:04:58.682 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:04:58.682 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:04:58.682 enable_docs : false
00:04:58.682 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:04:58.682 enable_kmods : false
00:04:58.682 max_lcores : 128
00:04:58.682 tests : false
00:04:58.682
00:04:58.682 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:04:58.682 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:04:58.682 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:04:58.682 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:04:58.682 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:04:58.682 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:04:58.943 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:04:58.943 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:04:58.943 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:04:58.943 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:04:58.943 [9/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:04:58.943 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:04:58.943 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:04:58.943 [12/268] Linking static target lib/librte_kvargs.a
00:04:58.943 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:04:58.943 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:04:58.943 [15/268] Linking static target lib/librte_log.a
00:04:58.943 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:04:59.517 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:04:59.517 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:04:59.777 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:04:59.777 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:04:59.777 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:04:59.777 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:04:59.777 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:04:59.777 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:04:59.777 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:04:59.777 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:04:59.777 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:04:59.777 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:04:59.777 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:04:59.777 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:04:59.777 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:04:59.777 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:04:59.777 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:04:59.777 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:04:59.777 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:04:59.777 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:04:59.777 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:04:59.777 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:04:59.777 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:04:59.777 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:04:59.777 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:04:59.777 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:04:59.777 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:04:59.777 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:04:59.777 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:04:59.777 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:04:59.777 [47/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:04:59.777 [48/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:04:59.777 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:04:59.777 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:04:59.777 [51/268] Linking static target lib/librte_telemetry.a
00:04:59.777 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:04:59.778 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:04:59.778 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:04:59.778 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:04:59.778 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:00.040 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:00.040 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:00.040 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:00.040 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:00.040 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:00.040 [62/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:00.040 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:00.040 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:00.040 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:00.040 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:00.040 [67/268] Linking target lib/librte_log.so.24.1
00:05:00.317 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:00.317 [69/268] Linking static target lib/librte_pci.a
00:05:00.317 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:00.317 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:00.317 [72/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:00.580 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:00.580 [74/268] Linking target lib/librte_kvargs.so.24.1
00:05:00.580 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:00.580 [76/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:00.580 [77/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:00.580 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:00.580 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:00.580 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:00.580 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:00.580 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:00.580 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:00.580 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:00.849 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:00.849 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:00.849 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:00.849 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:00.849 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:00.849 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:00.849 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:00.849 [92/268] Linking static target lib/librte_ring.a
00:05:00.849 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:00.850 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:00.850 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:00.850 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:00.850 [97/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:00.850 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:00.850 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:00.850 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:00.850 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:00.850 [102/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:00.850 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:00.850 [104/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:00.850 [105/268] Linking static target lib/librte_meter.a
00:05:00.850 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:00.850 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:00.850 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:00.850 [109/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:00.850 [110/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:00.850 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:00.850 [112/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:01.108 [113/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:01.108 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:01.108 [115/268] Linking static target lib/librte_eal.a
00:05:01.108 [116/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:01.108 [117/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:01.108 [118/268] Linking target lib/librte_telemetry.so.24.1
00:05:01.108 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:01.108 [120/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:01.108 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:01.108 [122/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:01.108 [123/268] Linking static target lib/librte_rcu.a
00:05:01.108 [124/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:01.108 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:01.108 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:01.108 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:01.108 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:01.108 [129/268] Linking static target lib/librte_mempool.a
00:05:01.108 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:01.370 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:01.370 [132/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:01.370 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:01.370 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:01.370 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:01.370 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:01.370 [137/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:01.370 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:01.370 [139/268] Linking static target lib/librte_net.a
00:05:01.370 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:01.636 [141/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:01.636 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:01.636 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:01.636 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:01.636 [145/268] Linking static target lib/librte_cmdline.a
00:05:01.636 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:01.636 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:01.636 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:01.636 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:01.636 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:01.903 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:01.903 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:01.903 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:01.903 [154/268] Linking static target lib/librte_timer.a
00:05:01.903 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:01.903 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:01.903 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:01.903 [158/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:01.903 [159/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:01.903 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:01.903 [161/268] Linking static target lib/librte_dmadev.a
00:05:01.903 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:01.903 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:02.161 [164/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:02.161 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:02.161 [166/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:02.161 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:02.161 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:02.161 [169/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.161 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:02.161 [171/268] Linking static target lib/librte_power.a
00:05:02.161 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:02.161 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:02.161 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:02.161 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:02.420 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:02.420 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:02.420 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:02.420 [179/268] Compiling C
object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:02.420 [180/268] Linking static target lib/librte_compressdev.a 00:05:02.420 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:02.420 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:02.420 [183/268] Linking static target lib/librte_hash.a 00:05:02.420 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:02.420 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:02.420 [186/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:02.420 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:02.420 [188/268] Linking static target lib/librte_mbuf.a 00:05:02.420 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:02.420 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.678 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.678 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:02.678 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:02.678 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:02.678 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:02.678 [196/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:02.678 [197/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:02.678 [198/268] Linking static target lib/librte_reorder.a 00:05:02.678 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:02.678 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:02.678 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:02.678 [202/268] Compiling C 
object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:02.678 [203/268] Linking static target drivers/librte_bus_vdev.a 00:05:02.678 [204/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:02.678 [205/268] Linking static target lib/librte_security.a 00:05:02.678 [206/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.678 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:02.678 [208/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.936 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:02.936 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:02.936 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:02.936 [212/268] Linking static target drivers/librte_mempool_ring.a 00:05:02.936 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:02.936 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:02.936 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:02.936 [216/268] Linking static target drivers/librte_bus_pci.a 00:05:02.936 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.936 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.936 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:02.936 [220/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.193 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:03.193 [222/268] Linking static target 
lib/librte_ethdev.a 00:05:03.193 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:03.193 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:03.193 [225/268] Linking static target lib/librte_cryptodev.a 00:05:03.451 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:04.384 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:05.759 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:07.660 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.660 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:07.660 [231/268] Linking target lib/librte_eal.so.24.1 00:05:07.660 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:07.660 [233/268] Linking target lib/librte_ring.so.24.1 00:05:07.660 [234/268] Linking target lib/librte_timer.so.24.1 00:05:07.660 [235/268] Linking target lib/librte_meter.so.24.1 00:05:07.660 [236/268] Linking target lib/librte_pci.so.24.1 00:05:07.660 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:07.660 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:07.917 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:07.917 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:07.917 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:07.917 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:07.917 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:07.917 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:07.917 [245/268] Linking target 
lib/librte_rcu.so.24.1 00:05:07.917 [246/268] Linking target lib/librte_mempool.so.24.1 00:05:07.917 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:07.917 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:07.917 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:07.917 [250/268] Linking target lib/librte_mbuf.so.24.1 00:05:08.175 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:08.175 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:08.175 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:08.175 [254/268] Linking target lib/librte_net.so.24.1 00:05:08.175 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:05:08.432 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:08.432 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:08.432 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:08.432 [259/268] Linking target lib/librte_security.so.24.1 00:05:08.432 [260/268] Linking target lib/librte_hash.so.24.1 00:05:08.432 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:08.432 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:08.432 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:08.689 [264/268] Linking target lib/librte_power.so.24.1 00:05:11.215 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:11.215 [266/268] Linking static target lib/librte_vhost.a 00:05:12.588 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:12.588 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:12.588 INFO: autodetecting backend as ninja 00:05:12.588 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:05:34.507 CC lib/ut/ut.o 00:05:34.507 CC lib/log/log.o 00:05:34.507 CC lib/log/log_flags.o 00:05:34.507 CC lib/log/log_deprecated.o 00:05:34.507 CC lib/ut_mock/mock.o 00:05:34.507 LIB libspdk_ut.a 00:05:34.507 LIB libspdk_log.a 00:05:34.507 LIB libspdk_ut_mock.a 00:05:34.507 SO libspdk_ut.so.2.0 00:05:34.507 SO libspdk_log.so.7.1 00:05:34.507 SO libspdk_ut_mock.so.6.0 00:05:34.507 SYMLINK libspdk_ut.so 00:05:34.507 SYMLINK libspdk_ut_mock.so 00:05:34.507 SYMLINK libspdk_log.so 00:05:34.507 CC lib/dma/dma.o 00:05:34.507 CC lib/ioat/ioat.o 00:05:34.507 CXX lib/trace_parser/trace.o 00:05:34.507 CC lib/util/base64.o 00:05:34.507 CC lib/util/bit_array.o 00:05:34.507 CC lib/util/cpuset.o 00:05:34.507 CC lib/util/crc16.o 00:05:34.507 CC lib/util/crc32.o 00:05:34.507 CC lib/util/crc32c.o 00:05:34.507 CC lib/util/crc32_ieee.o 00:05:34.507 CC lib/util/crc64.o 00:05:34.507 CC lib/util/dif.o 00:05:34.507 CC lib/util/fd.o 00:05:34.507 CC lib/util/fd_group.o 00:05:34.507 CC lib/util/file.o 00:05:34.507 CC lib/util/hexlify.o 00:05:34.507 CC lib/util/iov.o 00:05:34.507 CC lib/util/math.o 00:05:34.507 CC lib/util/net.o 00:05:34.507 CC lib/util/pipe.o 00:05:34.507 CC lib/util/strerror_tls.o 00:05:34.507 CC lib/util/string.o 00:05:34.507 CC lib/util/uuid.o 00:05:34.507 CC lib/util/xor.o 00:05:34.507 CC lib/util/md5.o 00:05:34.507 CC lib/util/zipf.o 00:05:34.507 CC lib/vfio_user/host/vfio_user_pci.o 00:05:34.507 CC lib/vfio_user/host/vfio_user.o 00:05:34.507 LIB libspdk_dma.a 00:05:34.507 SO libspdk_dma.so.5.0 00:05:34.507 SYMLINK libspdk_dma.so 00:05:34.507 LIB libspdk_ioat.a 00:05:34.507 SO libspdk_ioat.so.7.0 00:05:34.507 SYMLINK libspdk_ioat.so 00:05:34.507 LIB libspdk_vfio_user.a 00:05:34.507 SO libspdk_vfio_user.so.5.0 00:05:34.507 SYMLINK libspdk_vfio_user.so 00:05:34.507 LIB libspdk_util.a 00:05:34.507 SO libspdk_util.so.10.1 00:05:34.507 SYMLINK libspdk_util.so 00:05:34.507 CC lib/vmd/vmd.o 00:05:34.507 
CC lib/env_dpdk/env.o 00:05:34.507 CC lib/rdma_utils/rdma_utils.o 00:05:34.507 CC lib/vmd/led.o 00:05:34.507 CC lib/json/json_parse.o 00:05:34.507 CC lib/conf/conf.o 00:05:34.507 CC lib/env_dpdk/memory.o 00:05:34.507 CC lib/env_dpdk/pci.o 00:05:34.507 CC lib/json/json_util.o 00:05:34.507 CC lib/idxd/idxd.o 00:05:34.507 CC lib/json/json_write.o 00:05:34.507 CC lib/env_dpdk/init.o 00:05:34.507 CC lib/idxd/idxd_user.o 00:05:34.507 CC lib/env_dpdk/threads.o 00:05:34.507 CC lib/idxd/idxd_kernel.o 00:05:34.507 CC lib/env_dpdk/pci_ioat.o 00:05:34.507 CC lib/env_dpdk/pci_virtio.o 00:05:34.507 CC lib/env_dpdk/pci_vmd.o 00:05:34.507 CC lib/env_dpdk/pci_idxd.o 00:05:34.507 CC lib/env_dpdk/pci_event.o 00:05:34.507 CC lib/env_dpdk/sigbus_handler.o 00:05:34.507 CC lib/env_dpdk/pci_dpdk.o 00:05:34.507 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:34.507 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:34.507 LIB libspdk_trace_parser.a 00:05:34.507 SO libspdk_trace_parser.so.6.0 00:05:34.507 SYMLINK libspdk_trace_parser.so 00:05:34.507 LIB libspdk_conf.a 00:05:34.507 SO libspdk_conf.so.6.0 00:05:34.507 SYMLINK libspdk_conf.so 00:05:34.507 LIB libspdk_json.a 00:05:34.507 SO libspdk_json.so.6.0 00:05:34.507 LIB libspdk_rdma_utils.a 00:05:34.507 SO libspdk_rdma_utils.so.1.0 00:05:34.507 SYMLINK libspdk_json.so 00:05:34.507 SYMLINK libspdk_rdma_utils.so 00:05:34.507 CC lib/jsonrpc/jsonrpc_server.o 00:05:34.507 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:34.507 CC lib/jsonrpc/jsonrpc_client.o 00:05:34.507 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:34.507 CC lib/rdma_provider/common.o 00:05:34.507 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:34.507 LIB libspdk_idxd.a 00:05:34.507 SO libspdk_idxd.so.12.1 00:05:34.507 LIB libspdk_vmd.a 00:05:34.507 SYMLINK libspdk_idxd.so 00:05:34.507 SO libspdk_vmd.so.6.0 00:05:34.507 SYMLINK libspdk_vmd.so 00:05:34.507 LIB libspdk_rdma_provider.a 00:05:34.507 LIB libspdk_jsonrpc.a 00:05:34.507 SO libspdk_rdma_provider.so.7.0 00:05:34.508 SO libspdk_jsonrpc.so.6.0 
00:05:34.508 SYMLINK libspdk_rdma_provider.so 00:05:34.508 SYMLINK libspdk_jsonrpc.so 00:05:34.508 CC lib/rpc/rpc.o 00:05:34.508 LIB libspdk_rpc.a 00:05:34.508 SO libspdk_rpc.so.6.0 00:05:34.508 SYMLINK libspdk_rpc.so 00:05:34.508 CC lib/trace/trace.o 00:05:34.508 CC lib/trace/trace_flags.o 00:05:34.508 CC lib/keyring/keyring.o 00:05:34.508 CC lib/notify/notify.o 00:05:34.508 CC lib/trace/trace_rpc.o 00:05:34.508 CC lib/keyring/keyring_rpc.o 00:05:34.508 CC lib/notify/notify_rpc.o 00:05:34.765 LIB libspdk_notify.a 00:05:34.765 SO libspdk_notify.so.6.0 00:05:34.765 SYMLINK libspdk_notify.so 00:05:34.765 LIB libspdk_keyring.a 00:05:34.765 SO libspdk_keyring.so.2.0 00:05:34.765 LIB libspdk_trace.a 00:05:34.765 SO libspdk_trace.so.11.0 00:05:34.765 SYMLINK libspdk_keyring.so 00:05:34.765 SYMLINK libspdk_trace.so 00:05:35.023 LIB libspdk_env_dpdk.a 00:05:35.023 CC lib/sock/sock.o 00:05:35.023 CC lib/thread/thread.o 00:05:35.023 CC lib/sock/sock_rpc.o 00:05:35.023 CC lib/thread/iobuf.o 00:05:35.023 SO libspdk_env_dpdk.so.15.1 00:05:35.283 SYMLINK libspdk_env_dpdk.so 00:05:35.283 LIB libspdk_sock.a 00:05:35.556 SO libspdk_sock.so.10.0 00:05:35.556 SYMLINK libspdk_sock.so 00:05:35.556 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:35.556 CC lib/nvme/nvme_ctrlr.o 00:05:35.556 CC lib/nvme/nvme_fabric.o 00:05:35.556 CC lib/nvme/nvme_ns_cmd.o 00:05:35.556 CC lib/nvme/nvme_ns.o 00:05:35.556 CC lib/nvme/nvme_pcie_common.o 00:05:35.556 CC lib/nvme/nvme_pcie.o 00:05:35.556 CC lib/nvme/nvme_qpair.o 00:05:35.556 CC lib/nvme/nvme.o 00:05:35.556 CC lib/nvme/nvme_quirks.o 00:05:35.556 CC lib/nvme/nvme_transport.o 00:05:35.556 CC lib/nvme/nvme_discovery.o 00:05:35.556 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:35.556 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:35.556 CC lib/nvme/nvme_tcp.o 00:05:35.556 CC lib/nvme/nvme_opal.o 00:05:35.557 CC lib/nvme/nvme_io_msg.o 00:05:35.557 CC lib/nvme/nvme_poll_group.o 00:05:35.557 CC lib/nvme/nvme_zns.o 00:05:35.557 CC lib/nvme/nvme_stubs.o 00:05:35.557 CC 
lib/nvme/nvme_auth.o 00:05:35.557 CC lib/nvme/nvme_cuse.o 00:05:35.557 CC lib/nvme/nvme_vfio_user.o 00:05:35.557 CC lib/nvme/nvme_rdma.o 00:05:36.493 LIB libspdk_thread.a 00:05:36.751 SO libspdk_thread.so.11.0 00:05:36.751 SYMLINK libspdk_thread.so 00:05:36.751 CC lib/virtio/virtio.o 00:05:36.751 CC lib/vfu_tgt/tgt_endpoint.o 00:05:36.751 CC lib/fsdev/fsdev.o 00:05:36.751 CC lib/accel/accel.o 00:05:36.751 CC lib/virtio/virtio_vhost_user.o 00:05:36.751 CC lib/accel/accel_rpc.o 00:05:36.751 CC lib/vfu_tgt/tgt_rpc.o 00:05:36.751 CC lib/fsdev/fsdev_io.o 00:05:36.751 CC lib/init/json_config.o 00:05:36.751 CC lib/accel/accel_sw.o 00:05:36.751 CC lib/blob/blobstore.o 00:05:36.751 CC lib/virtio/virtio_vfio_user.o 00:05:36.751 CC lib/fsdev/fsdev_rpc.o 00:05:36.751 CC lib/init/subsystem.o 00:05:36.751 CC lib/blob/request.o 00:05:36.751 CC lib/virtio/virtio_pci.o 00:05:36.751 CC lib/init/subsystem_rpc.o 00:05:36.751 CC lib/blob/zeroes.o 00:05:36.751 CC lib/blob/blob_bs_dev.o 00:05:36.751 CC lib/init/rpc.o 00:05:37.317 LIB libspdk_init.a 00:05:37.317 SO libspdk_init.so.6.0 00:05:37.317 LIB libspdk_virtio.a 00:05:37.317 SYMLINK libspdk_init.so 00:05:37.317 LIB libspdk_vfu_tgt.a 00:05:37.317 SO libspdk_virtio.so.7.0 00:05:37.317 SO libspdk_vfu_tgt.so.3.0 00:05:37.317 SYMLINK libspdk_virtio.so 00:05:37.317 SYMLINK libspdk_vfu_tgt.so 00:05:37.317 CC lib/event/app.o 00:05:37.317 CC lib/event/reactor.o 00:05:37.317 CC lib/event/log_rpc.o 00:05:37.317 CC lib/event/app_rpc.o 00:05:37.317 CC lib/event/scheduler_static.o 00:05:37.573 LIB libspdk_fsdev.a 00:05:37.573 SO libspdk_fsdev.so.2.0 00:05:37.573 SYMLINK libspdk_fsdev.so 00:05:37.830 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:37.830 LIB libspdk_event.a 00:05:37.830 SO libspdk_event.so.14.0 00:05:38.088 SYMLINK libspdk_event.so 00:05:38.088 LIB libspdk_accel.a 00:05:38.088 SO libspdk_accel.so.16.0 00:05:38.088 LIB libspdk_nvme.a 00:05:38.088 SYMLINK libspdk_accel.so 00:05:38.345 SO libspdk_nvme.so.15.0 00:05:38.345 CC 
lib/bdev/bdev.o 00:05:38.345 CC lib/bdev/bdev_rpc.o 00:05:38.345 CC lib/bdev/bdev_zone.o 00:05:38.345 CC lib/bdev/part.o 00:05:38.345 CC lib/bdev/scsi_nvme.o 00:05:38.604 LIB libspdk_fuse_dispatcher.a 00:05:38.604 SYMLINK libspdk_nvme.so 00:05:38.604 SO libspdk_fuse_dispatcher.so.1.0 00:05:38.604 SYMLINK libspdk_fuse_dispatcher.so 00:05:40.035 LIB libspdk_blob.a 00:05:40.035 SO libspdk_blob.so.12.0 00:05:40.035 SYMLINK libspdk_blob.so 00:05:40.293 CC lib/lvol/lvol.o 00:05:40.293 CC lib/blobfs/blobfs.o 00:05:40.293 CC lib/blobfs/tree.o 00:05:41.231 LIB libspdk_bdev.a 00:05:41.231 SO libspdk_bdev.so.17.0 00:05:41.231 LIB libspdk_blobfs.a 00:05:41.231 SO libspdk_blobfs.so.11.0 00:05:41.231 SYMLINK libspdk_bdev.so 00:05:41.231 SYMLINK libspdk_blobfs.so 00:05:41.231 LIB libspdk_lvol.a 00:05:41.231 SO libspdk_lvol.so.11.0 00:05:41.231 SYMLINK libspdk_lvol.so 00:05:41.231 CC lib/scsi/dev.o 00:05:41.231 CC lib/nbd/nbd.o 00:05:41.231 CC lib/nvmf/ctrlr.o 00:05:41.231 CC lib/ublk/ublk.o 00:05:41.231 CC lib/scsi/lun.o 00:05:41.231 CC lib/nbd/nbd_rpc.o 00:05:41.231 CC lib/nvmf/ctrlr_discovery.o 00:05:41.231 CC lib/ublk/ublk_rpc.o 00:05:41.231 CC lib/scsi/port.o 00:05:41.231 CC lib/nvmf/ctrlr_bdev.o 00:05:41.231 CC lib/ftl/ftl_core.o 00:05:41.231 CC lib/scsi/scsi.o 00:05:41.231 CC lib/nvmf/subsystem.o 00:05:41.231 CC lib/ftl/ftl_init.o 00:05:41.231 CC lib/scsi/scsi_bdev.o 00:05:41.231 CC lib/ftl/ftl_layout.o 00:05:41.231 CC lib/scsi/scsi_pr.o 00:05:41.231 CC lib/nvmf/nvmf.o 00:05:41.231 CC lib/scsi/scsi_rpc.o 00:05:41.231 CC lib/nvmf/nvmf_rpc.o 00:05:41.231 CC lib/ftl/ftl_debug.o 00:05:41.231 CC lib/scsi/task.o 00:05:41.231 CC lib/ftl/ftl_io.o 00:05:41.231 CC lib/nvmf/transport.o 00:05:41.231 CC lib/ftl/ftl_l2p.o 00:05:41.231 CC lib/ftl/ftl_sb.o 00:05:41.231 CC lib/nvmf/tcp.o 00:05:41.231 CC lib/ftl/ftl_l2p_flat.o 00:05:41.231 CC lib/nvmf/stubs.o 00:05:41.231 CC lib/ftl/ftl_nv_cache.o 00:05:41.231 CC lib/nvmf/mdns_server.o 00:05:41.231 CC lib/nvmf/vfio_user.o 00:05:41.231 CC 
lib/nvmf/rdma.o 00:05:41.231 CC lib/ftl/ftl_band.o 00:05:41.231 CC lib/ftl/ftl_band_ops.o 00:05:41.231 CC lib/nvmf/auth.o 00:05:41.231 CC lib/ftl/ftl_writer.o 00:05:41.231 CC lib/ftl/ftl_rq.o 00:05:41.231 CC lib/ftl/ftl_reloc.o 00:05:41.231 CC lib/ftl/ftl_l2p_cache.o 00:05:41.231 CC lib/ftl/ftl_p2l.o 00:05:41.231 CC lib/ftl/ftl_p2l_log.o 00:05:41.231 CC lib/ftl/mngt/ftl_mngt.o 00:05:41.231 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:41.231 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:41.231 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:41.232 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:41.232 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:41.808 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:41.808 CC lib/ftl/utils/ftl_conf.o 00:05:41.808 CC lib/ftl/utils/ftl_md.o 00:05:41.808 CC lib/ftl/utils/ftl_mempool.o 00:05:41.808 CC lib/ftl/utils/ftl_bitmap.o 00:05:41.808 CC lib/ftl/utils/ftl_property.o 00:05:41.808 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:41.808 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:41.808 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:41.808 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:41.808 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:41.808 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:42.066 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:42.066 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:42.066 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:42.066 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:42.066 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:42.066 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:42.066 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:42.066 CC lib/ftl/base/ftl_base_dev.o 00:05:42.066 CC lib/ftl/base/ftl_base_bdev.o 00:05:42.066 CC lib/ftl/ftl_trace.o 00:05:42.066 LIB libspdk_nbd.a 00:05:42.066 SO libspdk_nbd.so.7.0 00:05:42.324 SYMLINK 
libspdk_nbd.so 00:05:42.324 LIB libspdk_scsi.a 00:05:42.324 SO libspdk_scsi.so.9.0 00:05:42.324 SYMLINK libspdk_scsi.so 00:05:42.324 LIB libspdk_ublk.a 00:05:42.582 SO libspdk_ublk.so.3.0 00:05:42.583 SYMLINK libspdk_ublk.so 00:05:42.583 CC lib/iscsi/conn.o 00:05:42.583 CC lib/vhost/vhost.o 00:05:42.583 CC lib/vhost/vhost_rpc.o 00:05:42.583 CC lib/iscsi/init_grp.o 00:05:42.583 CC lib/iscsi/iscsi.o 00:05:42.583 CC lib/vhost/vhost_scsi.o 00:05:42.583 CC lib/iscsi/param.o 00:05:42.583 CC lib/vhost/vhost_blk.o 00:05:42.583 CC lib/iscsi/portal_grp.o 00:05:42.583 CC lib/vhost/rte_vhost_user.o 00:05:42.583 CC lib/iscsi/tgt_node.o 00:05:42.583 CC lib/iscsi/iscsi_subsystem.o 00:05:42.583 CC lib/iscsi/iscsi_rpc.o 00:05:42.583 CC lib/iscsi/task.o 00:05:42.840 LIB libspdk_ftl.a 00:05:43.097 SO libspdk_ftl.so.9.0 00:05:43.363 SYMLINK libspdk_ftl.so 00:05:44.029 LIB libspdk_vhost.a 00:05:44.029 SO libspdk_vhost.so.8.0 00:05:44.029 SYMLINK libspdk_vhost.so 00:05:44.029 LIB libspdk_iscsi.a 00:05:44.029 LIB libspdk_nvmf.a 00:05:44.029 SO libspdk_iscsi.so.8.0 00:05:44.029 SO libspdk_nvmf.so.20.0 00:05:44.287 SYMLINK libspdk_iscsi.so 00:05:44.287 SYMLINK libspdk_nvmf.so 00:05:44.545 CC module/vfu_device/vfu_virtio.o 00:05:44.545 CC module/vfu_device/vfu_virtio_blk.o 00:05:44.545 CC module/env_dpdk/env_dpdk_rpc.o 00:05:44.545 CC module/vfu_device/vfu_virtio_scsi.o 00:05:44.545 CC module/vfu_device/vfu_virtio_rpc.o 00:05:44.545 CC module/vfu_device/vfu_virtio_fs.o 00:05:44.545 CC module/sock/posix/posix.o 00:05:44.545 CC module/accel/iaa/accel_iaa.o 00:05:44.545 CC module/keyring/file/keyring.o 00:05:44.545 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:44.545 CC module/accel/iaa/accel_iaa_rpc.o 00:05:44.545 CC module/keyring/file/keyring_rpc.o 00:05:44.545 CC module/fsdev/aio/fsdev_aio.o 00:05:44.545 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:44.545 CC module/accel/dsa/accel_dsa.o 00:05:44.545 CC module/accel/dsa/accel_dsa_rpc.o 00:05:44.545 CC module/accel/ioat/accel_ioat.o 
00:05:44.545 CC module/accel/ioat/accel_ioat_rpc.o 00:05:44.545 CC module/fsdev/aio/linux_aio_mgr.o 00:05:44.545 CC module/accel/error/accel_error.o 00:05:44.545 CC module/blob/bdev/blob_bdev.o 00:05:44.545 CC module/scheduler/gscheduler/gscheduler.o 00:05:44.545 CC module/accel/error/accel_error_rpc.o 00:05:44.545 CC module/keyring/linux/keyring.o 00:05:44.545 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:44.545 CC module/keyring/linux/keyring_rpc.o 00:05:44.802 LIB libspdk_env_dpdk_rpc.a 00:05:44.802 SO libspdk_env_dpdk_rpc.so.6.0 00:05:44.802 SYMLINK libspdk_env_dpdk_rpc.so 00:05:44.802 LIB libspdk_scheduler_dpdk_governor.a 00:05:44.802 LIB libspdk_scheduler_gscheduler.a 00:05:44.802 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:44.802 SO libspdk_scheduler_gscheduler.so.4.0 00:05:44.802 LIB libspdk_scheduler_dynamic.a 00:05:44.802 LIB libspdk_accel_iaa.a 00:05:44.802 LIB libspdk_keyring_file.a 00:05:44.802 LIB libspdk_keyring_linux.a 00:05:44.802 SO libspdk_scheduler_dynamic.so.4.0 00:05:44.802 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:44.802 SO libspdk_keyring_file.so.2.0 00:05:44.802 SO libspdk_accel_iaa.so.3.0 00:05:44.802 SO libspdk_keyring_linux.so.1.0 00:05:45.059 SYMLINK libspdk_scheduler_gscheduler.so 00:05:45.059 LIB libspdk_accel_ioat.a 00:05:45.059 SYMLINK libspdk_scheduler_dynamic.so 00:05:45.059 LIB libspdk_blob_bdev.a 00:05:45.059 SYMLINK libspdk_keyring_file.so 00:05:45.059 SYMLINK libspdk_accel_iaa.so 00:05:45.059 SO libspdk_accel_ioat.so.6.0 00:05:45.059 SYMLINK libspdk_keyring_linux.so 00:05:45.059 LIB libspdk_accel_dsa.a 00:05:45.059 LIB libspdk_accel_error.a 00:05:45.059 SO libspdk_blob_bdev.so.12.0 00:05:45.059 SO libspdk_accel_dsa.so.5.0 00:05:45.059 SO libspdk_accel_error.so.2.0 00:05:45.059 SYMLINK libspdk_accel_ioat.so 00:05:45.059 SYMLINK libspdk_blob_bdev.so 00:05:45.059 SYMLINK libspdk_accel_dsa.so 00:05:45.059 SYMLINK libspdk_accel_error.so 00:05:45.332 LIB libspdk_vfu_device.a 00:05:45.332 SO 
libspdk_vfu_device.so.3.0 00:05:45.332 CC module/bdev/gpt/gpt.o 00:05:45.332 CC module/bdev/passthru/vbdev_passthru.o 00:05:45.332 CC module/bdev/lvol/vbdev_lvol.o 00:05:45.332 CC module/blobfs/bdev/blobfs_bdev.o 00:05:45.332 CC module/bdev/nvme/bdev_nvme.o 00:05:45.332 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:45.332 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:45.332 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:45.332 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:45.332 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:45.332 CC module/bdev/gpt/vbdev_gpt.o 00:05:45.332 CC module/bdev/nvme/nvme_rpc.o 00:05:45.332 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:45.332 CC module/bdev/null/bdev_null.o 00:05:45.332 CC module/bdev/aio/bdev_aio.o 00:05:45.332 CC module/bdev/nvme/bdev_mdns_client.o 00:05:45.332 CC module/bdev/aio/bdev_aio_rpc.o 00:05:45.332 CC module/bdev/split/vbdev_split.o 00:05:45.332 CC module/bdev/nvme/vbdev_opal.o 00:05:45.332 CC module/bdev/null/bdev_null_rpc.o 00:05:45.332 CC module/bdev/raid/bdev_raid.o 00:05:45.332 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:45.332 CC module/bdev/split/vbdev_split_rpc.o 00:05:45.332 CC module/bdev/error/vbdev_error.o 00:05:45.332 CC module/bdev/ftl/bdev_ftl.o 00:05:45.332 CC module/bdev/malloc/bdev_malloc.o 00:05:45.332 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:45.332 CC module/bdev/delay/vbdev_delay.o 00:05:45.332 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:45.332 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:45.332 CC module/bdev/raid/bdev_raid_rpc.o 00:05:45.332 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:45.332 CC module/bdev/error/vbdev_error_rpc.o 00:05:45.332 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:45.332 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:45.332 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:45.332 CC module/bdev/raid/bdev_raid_sb.o 00:05:45.332 CC module/bdev/iscsi/bdev_iscsi.o 00:05:45.332 CC module/bdev/raid/raid0.o 00:05:45.332 CC module/bdev/iscsi/bdev_iscsi_rpc.o 
00:05:45.332 CC module/bdev/raid/raid1.o 00:05:45.332 CC module/bdev/raid/concat.o 00:05:45.332 SYMLINK libspdk_vfu_device.so 00:05:45.332 LIB libspdk_fsdev_aio.a 00:05:45.590 SO libspdk_fsdev_aio.so.1.0 00:05:45.590 LIB libspdk_sock_posix.a 00:05:45.590 SYMLINK libspdk_fsdev_aio.so 00:05:45.590 SO libspdk_sock_posix.so.6.0 00:05:45.590 LIB libspdk_blobfs_bdev.a 00:05:45.590 SYMLINK libspdk_sock_posix.so 00:05:45.847 SO libspdk_blobfs_bdev.so.6.0 00:05:45.847 LIB libspdk_bdev_split.a 00:05:45.847 SYMLINK libspdk_blobfs_bdev.so 00:05:45.847 SO libspdk_bdev_split.so.6.0 00:05:45.847 LIB libspdk_bdev_passthru.a 00:05:45.847 LIB libspdk_bdev_gpt.a 00:05:45.847 LIB libspdk_bdev_null.a 00:05:45.847 LIB libspdk_bdev_error.a 00:05:45.847 SO libspdk_bdev_passthru.so.6.0 00:05:45.847 SO libspdk_bdev_gpt.so.6.0 00:05:45.847 SO libspdk_bdev_null.so.6.0 00:05:45.847 SO libspdk_bdev_error.so.6.0 00:05:45.847 SYMLINK libspdk_bdev_split.so 00:05:45.847 LIB libspdk_bdev_zone_block.a 00:05:45.847 LIB libspdk_bdev_ftl.a 00:05:45.847 SO libspdk_bdev_zone_block.so.6.0 00:05:45.847 SYMLINK libspdk_bdev_passthru.so 00:05:45.847 LIB libspdk_bdev_delay.a 00:05:45.847 SYMLINK libspdk_bdev_gpt.so 00:05:45.847 SYMLINK libspdk_bdev_null.so 00:05:45.847 SO libspdk_bdev_ftl.so.6.0 00:05:45.847 SYMLINK libspdk_bdev_error.so 00:05:45.847 LIB libspdk_bdev_iscsi.a 00:05:45.847 LIB libspdk_bdev_aio.a 00:05:45.847 SO libspdk_bdev_delay.so.6.0 00:05:45.847 SO libspdk_bdev_iscsi.so.6.0 00:05:45.847 SYMLINK libspdk_bdev_zone_block.so 00:05:45.847 SO libspdk_bdev_aio.so.6.0 00:05:45.847 SYMLINK libspdk_bdev_ftl.so 00:05:45.847 LIB libspdk_bdev_malloc.a 00:05:46.104 SYMLINK libspdk_bdev_delay.so 00:05:46.104 SO libspdk_bdev_malloc.so.6.0 00:05:46.104 SYMLINK libspdk_bdev_iscsi.so 00:05:46.104 SYMLINK libspdk_bdev_aio.so 00:05:46.104 SYMLINK libspdk_bdev_malloc.so 00:05:46.104 LIB libspdk_bdev_virtio.a 00:05:46.104 LIB libspdk_bdev_lvol.a 00:05:46.104 SO libspdk_bdev_virtio.so.6.0 00:05:46.104 SO 
libspdk_bdev_lvol.so.6.0 00:05:46.104 SYMLINK libspdk_bdev_lvol.so 00:05:46.104 SYMLINK libspdk_bdev_virtio.so 00:05:46.418 LIB libspdk_bdev_raid.a 00:05:46.675 SO libspdk_bdev_raid.so.6.0 00:05:46.675 SYMLINK libspdk_bdev_raid.so 00:05:48.044 LIB libspdk_bdev_nvme.a 00:05:48.302 SO libspdk_bdev_nvme.so.7.1 00:05:48.302 SYMLINK libspdk_bdev_nvme.so 00:05:48.560 CC module/event/subsystems/keyring/keyring.o 00:05:48.560 CC module/event/subsystems/vmd/vmd.o 00:05:48.560 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:48.560 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:48.560 CC module/event/subsystems/iobuf/iobuf.o 00:05:48.560 CC module/event/subsystems/fsdev/fsdev.o 00:05:48.560 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:48.560 CC module/event/subsystems/sock/sock.o 00:05:48.560 CC module/event/subsystems/scheduler/scheduler.o 00:05:48.560 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:48.817 LIB libspdk_event_keyring.a 00:05:48.817 LIB libspdk_event_vhost_blk.a 00:05:48.817 LIB libspdk_event_fsdev.a 00:05:48.817 LIB libspdk_event_scheduler.a 00:05:48.817 LIB libspdk_event_vfu_tgt.a 00:05:48.817 LIB libspdk_event_vmd.a 00:05:48.817 LIB libspdk_event_sock.a 00:05:48.817 SO libspdk_event_keyring.so.1.0 00:05:48.817 SO libspdk_event_fsdev.so.1.0 00:05:48.817 SO libspdk_event_vhost_blk.so.3.0 00:05:48.817 LIB libspdk_event_iobuf.a 00:05:48.817 SO libspdk_event_vfu_tgt.so.3.0 00:05:48.817 SO libspdk_event_scheduler.so.4.0 00:05:48.817 SO libspdk_event_vmd.so.6.0 00:05:48.817 SO libspdk_event_sock.so.5.0 00:05:48.817 SO libspdk_event_iobuf.so.3.0 00:05:48.817 SYMLINK libspdk_event_keyring.so 00:05:48.817 SYMLINK libspdk_event_vhost_blk.so 00:05:48.817 SYMLINK libspdk_event_fsdev.so 00:05:48.817 SYMLINK libspdk_event_vfu_tgt.so 00:05:48.817 SYMLINK libspdk_event_scheduler.so 00:05:48.817 SYMLINK libspdk_event_sock.so 00:05:48.817 SYMLINK libspdk_event_vmd.so 00:05:48.817 SYMLINK libspdk_event_iobuf.so 00:05:49.075 CC 
module/event/subsystems/accel/accel.o 00:05:49.335 LIB libspdk_event_accel.a 00:05:49.335 SO libspdk_event_accel.so.6.0 00:05:49.335 SYMLINK libspdk_event_accel.so 00:05:49.594 CC module/event/subsystems/bdev/bdev.o 00:05:49.594 LIB libspdk_event_bdev.a 00:05:49.594 SO libspdk_event_bdev.so.6.0 00:05:49.852 SYMLINK libspdk_event_bdev.so 00:05:49.852 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:49.852 CC module/event/subsystems/scsi/scsi.o 00:05:49.852 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:49.852 CC module/event/subsystems/nbd/nbd.o 00:05:49.852 CC module/event/subsystems/ublk/ublk.o 00:05:50.110 LIB libspdk_event_nbd.a 00:05:50.110 LIB libspdk_event_ublk.a 00:05:50.110 LIB libspdk_event_scsi.a 00:05:50.110 SO libspdk_event_nbd.so.6.0 00:05:50.110 SO libspdk_event_ublk.so.3.0 00:05:50.110 SO libspdk_event_scsi.so.6.0 00:05:50.110 SYMLINK libspdk_event_nbd.so 00:05:50.110 SYMLINK libspdk_event_ublk.so 00:05:50.110 SYMLINK libspdk_event_scsi.so 00:05:50.110 LIB libspdk_event_nvmf.a 00:05:50.110 SO libspdk_event_nvmf.so.6.0 00:05:50.368 SYMLINK libspdk_event_nvmf.so 00:05:50.368 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:50.368 CC module/event/subsystems/iscsi/iscsi.o 00:05:50.368 LIB libspdk_event_vhost_scsi.a 00:05:50.368 SO libspdk_event_vhost_scsi.so.3.0 00:05:50.368 LIB libspdk_event_iscsi.a 00:05:50.627 SO libspdk_event_iscsi.so.6.0 00:05:50.627 SYMLINK libspdk_event_vhost_scsi.so 00:05:50.627 SYMLINK libspdk_event_iscsi.so 00:05:50.627 SO libspdk.so.6.0 00:05:50.627 SYMLINK libspdk.so 00:05:50.892 CC app/trace_record/trace_record.o 00:05:50.892 CXX app/trace/trace.o 00:05:50.892 CC app/spdk_nvme_identify/identify.o 00:05:50.892 CC app/spdk_top/spdk_top.o 00:05:50.892 CC app/spdk_nvme_perf/perf.o 00:05:50.892 CC app/spdk_lspci/spdk_lspci.o 00:05:50.892 TEST_HEADER include/spdk/accel.h 00:05:50.892 CC test/rpc_client/rpc_client_test.o 00:05:50.892 TEST_HEADER include/spdk/accel_module.h 00:05:50.892 TEST_HEADER include/spdk/assert.h 
00:05:50.892 TEST_HEADER include/spdk/base64.h 00:05:50.892 TEST_HEADER include/spdk/barrier.h 00:05:50.892 CC app/spdk_nvme_discover/discovery_aer.o 00:05:50.892 TEST_HEADER include/spdk/bdev.h 00:05:50.892 TEST_HEADER include/spdk/bdev_module.h 00:05:50.892 TEST_HEADER include/spdk/bdev_zone.h 00:05:50.892 TEST_HEADER include/spdk/bit_array.h 00:05:50.892 TEST_HEADER include/spdk/bit_pool.h 00:05:50.892 TEST_HEADER include/spdk/blob_bdev.h 00:05:50.892 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:50.892 TEST_HEADER include/spdk/blobfs.h 00:05:50.892 TEST_HEADER include/spdk/blob.h 00:05:50.892 TEST_HEADER include/spdk/config.h 00:05:50.892 TEST_HEADER include/spdk/conf.h 00:05:50.892 TEST_HEADER include/spdk/cpuset.h 00:05:50.892 TEST_HEADER include/spdk/crc16.h 00:05:50.892 TEST_HEADER include/spdk/crc32.h 00:05:50.892 TEST_HEADER include/spdk/crc64.h 00:05:50.892 TEST_HEADER include/spdk/dif.h 00:05:50.892 TEST_HEADER include/spdk/dma.h 00:05:50.892 TEST_HEADER include/spdk/endian.h 00:05:50.892 TEST_HEADER include/spdk/env_dpdk.h 00:05:50.892 TEST_HEADER include/spdk/env.h 00:05:50.892 TEST_HEADER include/spdk/event.h 00:05:50.892 TEST_HEADER include/spdk/fd.h 00:05:50.892 TEST_HEADER include/spdk/fd_group.h 00:05:50.892 TEST_HEADER include/spdk/file.h 00:05:50.892 TEST_HEADER include/spdk/fsdev_module.h 00:05:50.892 TEST_HEADER include/spdk/fsdev.h 00:05:50.892 TEST_HEADER include/spdk/ftl.h 00:05:50.892 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:50.892 TEST_HEADER include/spdk/gpt_spec.h 00:05:50.892 TEST_HEADER include/spdk/histogram_data.h 00:05:50.892 TEST_HEADER include/spdk/hexlify.h 00:05:50.892 TEST_HEADER include/spdk/idxd.h 00:05:50.892 TEST_HEADER include/spdk/idxd_spec.h 00:05:50.892 TEST_HEADER include/spdk/init.h 00:05:50.892 TEST_HEADER include/spdk/ioat.h 00:05:50.892 TEST_HEADER include/spdk/iscsi_spec.h 00:05:50.892 TEST_HEADER include/spdk/json.h 00:05:50.892 TEST_HEADER include/spdk/ioat_spec.h 00:05:50.892 TEST_HEADER 
include/spdk/jsonrpc.h 00:05:50.892 TEST_HEADER include/spdk/keyring.h 00:05:50.892 TEST_HEADER include/spdk/keyring_module.h 00:05:50.892 TEST_HEADER include/spdk/likely.h 00:05:50.892 TEST_HEADER include/spdk/log.h 00:05:50.892 TEST_HEADER include/spdk/lvol.h 00:05:50.892 TEST_HEADER include/spdk/md5.h 00:05:50.892 TEST_HEADER include/spdk/memory.h 00:05:50.892 TEST_HEADER include/spdk/mmio.h 00:05:50.892 TEST_HEADER include/spdk/nbd.h 00:05:50.892 TEST_HEADER include/spdk/net.h 00:05:50.892 TEST_HEADER include/spdk/notify.h 00:05:50.892 TEST_HEADER include/spdk/nvme.h 00:05:50.892 TEST_HEADER include/spdk/nvme_intel.h 00:05:50.892 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:50.892 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:50.892 TEST_HEADER include/spdk/nvme_zns.h 00:05:50.892 TEST_HEADER include/spdk/nvme_spec.h 00:05:50.892 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:50.892 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:50.892 TEST_HEADER include/spdk/nvmf.h 00:05:50.892 TEST_HEADER include/spdk/nvmf_spec.h 00:05:50.892 TEST_HEADER include/spdk/nvmf_transport.h 00:05:50.892 TEST_HEADER include/spdk/opal.h 00:05:50.892 TEST_HEADER include/spdk/opal_spec.h 00:05:50.892 TEST_HEADER include/spdk/pci_ids.h 00:05:50.892 TEST_HEADER include/spdk/pipe.h 00:05:50.892 TEST_HEADER include/spdk/queue.h 00:05:50.892 TEST_HEADER include/spdk/reduce.h 00:05:50.892 TEST_HEADER include/spdk/rpc.h 00:05:50.892 TEST_HEADER include/spdk/scsi.h 00:05:50.892 TEST_HEADER include/spdk/scheduler.h 00:05:50.892 TEST_HEADER include/spdk/scsi_spec.h 00:05:50.892 TEST_HEADER include/spdk/sock.h 00:05:50.892 TEST_HEADER include/spdk/stdinc.h 00:05:50.892 TEST_HEADER include/spdk/string.h 00:05:50.892 TEST_HEADER include/spdk/trace.h 00:05:50.892 TEST_HEADER include/spdk/thread.h 00:05:50.892 TEST_HEADER include/spdk/trace_parser.h 00:05:50.892 TEST_HEADER include/spdk/ublk.h 00:05:50.892 TEST_HEADER include/spdk/tree.h 00:05:50.892 TEST_HEADER include/spdk/util.h 00:05:50.892 CC 
examples/interrupt_tgt/interrupt_tgt.o 00:05:50.892 TEST_HEADER include/spdk/uuid.h 00:05:50.892 TEST_HEADER include/spdk/version.h 00:05:50.892 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:50.892 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:50.892 TEST_HEADER include/spdk/vhost.h 00:05:50.892 TEST_HEADER include/spdk/vmd.h 00:05:50.892 TEST_HEADER include/spdk/xor.h 00:05:50.892 TEST_HEADER include/spdk/zipf.h 00:05:50.892 CXX test/cpp_headers/accel.o 00:05:50.892 CXX test/cpp_headers/accel_module.o 00:05:50.892 CXX test/cpp_headers/assert.o 00:05:50.892 CXX test/cpp_headers/barrier.o 00:05:50.892 CXX test/cpp_headers/base64.o 00:05:50.892 CXX test/cpp_headers/bdev.o 00:05:50.892 CXX test/cpp_headers/bdev_module.o 00:05:50.892 CXX test/cpp_headers/bdev_zone.o 00:05:50.892 CXX test/cpp_headers/bit_array.o 00:05:50.892 CXX test/cpp_headers/bit_pool.o 00:05:50.892 CXX test/cpp_headers/blob_bdev.o 00:05:50.892 CXX test/cpp_headers/blobfs_bdev.o 00:05:50.892 CXX test/cpp_headers/blobfs.o 00:05:50.892 CXX test/cpp_headers/blob.o 00:05:50.892 CXX test/cpp_headers/conf.o 00:05:50.892 CXX test/cpp_headers/config.o 00:05:50.892 CC app/spdk_dd/spdk_dd.o 00:05:50.892 CXX test/cpp_headers/cpuset.o 00:05:50.892 CC app/nvmf_tgt/nvmf_main.o 00:05:50.892 CXX test/cpp_headers/crc16.o 00:05:50.892 CC app/iscsi_tgt/iscsi_tgt.o 00:05:50.892 CC app/spdk_tgt/spdk_tgt.o 00:05:50.892 CXX test/cpp_headers/crc32.o 00:05:50.892 CC app/fio/nvme/fio_plugin.o 00:05:50.892 CC examples/ioat/perf/perf.o 00:05:50.892 CC examples/ioat/verify/verify.o 00:05:50.892 CC test/app/histogram_perf/histogram_perf.o 00:05:50.892 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:50.892 CC examples/util/zipf/zipf.o 00:05:50.892 CC test/app/jsoncat/jsoncat.o 00:05:50.892 CC test/env/memory/memory_ut.o 00:05:50.892 CC test/env/pci/pci_ut.o 00:05:50.892 CC test/env/vtophys/vtophys.o 00:05:50.892 CC test/thread/poller_perf/poller_perf.o 00:05:51.157 CC test/app/stub/stub.o 00:05:51.157 CC 
app/fio/bdev/fio_plugin.o 00:05:51.157 CC test/dma/test_dma/test_dma.o 00:05:51.157 CC test/app/bdev_svc/bdev_svc.o 00:05:51.157 LINK spdk_lspci 00:05:51.157 CC test/env/mem_callbacks/mem_callbacks.o 00:05:51.157 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:51.157 LINK rpc_client_test 00:05:51.420 LINK spdk_nvme_discover 00:05:51.420 LINK jsoncat 00:05:51.420 LINK histogram_perf 00:05:51.420 LINK vtophys 00:05:51.420 LINK zipf 00:05:51.420 CXX test/cpp_headers/crc64.o 00:05:51.420 LINK nvmf_tgt 00:05:51.420 CXX test/cpp_headers/dif.o 00:05:51.420 LINK env_dpdk_post_init 00:05:51.420 CXX test/cpp_headers/dma.o 00:05:51.420 LINK spdk_trace_record 00:05:51.420 LINK interrupt_tgt 00:05:51.420 LINK poller_perf 00:05:51.420 CXX test/cpp_headers/endian.o 00:05:51.420 CXX test/cpp_headers/env_dpdk.o 00:05:51.420 CXX test/cpp_headers/env.o 00:05:51.420 CXX test/cpp_headers/event.o 00:05:51.420 CXX test/cpp_headers/fd_group.o 00:05:51.420 CXX test/cpp_headers/fd.o 00:05:51.420 CXX test/cpp_headers/file.o 00:05:51.420 CXX test/cpp_headers/fsdev.o 00:05:51.420 CXX test/cpp_headers/fsdev_module.o 00:05:51.420 LINK stub 00:05:51.420 LINK ioat_perf 00:05:51.420 LINK iscsi_tgt 00:05:51.420 LINK verify 00:05:51.420 CXX test/cpp_headers/ftl.o 00:05:51.420 LINK bdev_svc 00:05:51.420 LINK spdk_tgt 00:05:51.420 CXX test/cpp_headers/fuse_dispatcher.o 00:05:51.420 CXX test/cpp_headers/gpt_spec.o 00:05:51.420 CXX test/cpp_headers/hexlify.o 00:05:51.420 CXX test/cpp_headers/histogram_data.o 00:05:51.684 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:51.684 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:51.684 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:51.684 CXX test/cpp_headers/idxd.o 00:05:51.684 CXX test/cpp_headers/idxd_spec.o 00:05:51.684 CXX test/cpp_headers/init.o 00:05:51.684 CXX test/cpp_headers/ioat.o 00:05:51.684 CXX test/cpp_headers/ioat_spec.o 00:05:51.684 LINK spdk_dd 00:05:51.684 CXX test/cpp_headers/iscsi_spec.o 00:05:51.684 LINK spdk_trace 00:05:51.684 CXX 
test/cpp_headers/json.o 00:05:51.684 CXX test/cpp_headers/jsonrpc.o 00:05:51.684 CXX test/cpp_headers/keyring.o 00:05:51.684 CXX test/cpp_headers/keyring_module.o 00:05:51.684 CXX test/cpp_headers/likely.o 00:05:51.950 CXX test/cpp_headers/log.o 00:05:51.950 CXX test/cpp_headers/lvol.o 00:05:51.950 CXX test/cpp_headers/md5.o 00:05:51.950 CXX test/cpp_headers/memory.o 00:05:51.950 CXX test/cpp_headers/mmio.o 00:05:51.950 LINK pci_ut 00:05:51.950 CXX test/cpp_headers/nbd.o 00:05:51.950 CXX test/cpp_headers/net.o 00:05:51.950 CXX test/cpp_headers/notify.o 00:05:51.950 CXX test/cpp_headers/nvme.o 00:05:51.950 CXX test/cpp_headers/nvme_intel.o 00:05:51.950 CXX test/cpp_headers/nvme_ocssd.o 00:05:51.950 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:51.950 CXX test/cpp_headers/nvme_spec.o 00:05:51.950 CXX test/cpp_headers/nvme_zns.o 00:05:51.950 CXX test/cpp_headers/nvmf_cmd.o 00:05:51.950 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:51.950 CXX test/cpp_headers/nvmf.o 00:05:51.950 CXX test/cpp_headers/nvmf_spec.o 00:05:51.950 CC examples/thread/thread/thread_ex.o 00:05:51.950 CXX test/cpp_headers/nvmf_transport.o 00:05:51.950 CC examples/sock/hello_world/hello_sock.o 00:05:52.211 LINK nvme_fuzz 00:05:52.211 CXX test/cpp_headers/opal.o 00:05:52.211 CC examples/idxd/perf/perf.o 00:05:52.211 CC examples/vmd/lsvmd/lsvmd.o 00:05:52.211 CC test/event/event_perf/event_perf.o 00:05:52.211 CC examples/vmd/led/led.o 00:05:52.211 CXX test/cpp_headers/opal_spec.o 00:05:52.211 CXX test/cpp_headers/pci_ids.o 00:05:52.211 LINK spdk_bdev 00:05:52.211 LINK spdk_nvme 00:05:52.211 LINK test_dma 00:05:52.211 CXX test/cpp_headers/pipe.o 00:05:52.211 CC test/event/reactor/reactor.o 00:05:52.211 CXX test/cpp_headers/queue.o 00:05:52.211 CC test/event/reactor_perf/reactor_perf.o 00:05:52.211 CXX test/cpp_headers/reduce.o 00:05:52.211 CXX test/cpp_headers/rpc.o 00:05:52.211 CXX test/cpp_headers/scheduler.o 00:05:52.211 CXX test/cpp_headers/scsi.o 00:05:52.211 CXX test/cpp_headers/scsi_spec.o 
00:05:52.211 CXX test/cpp_headers/sock.o 00:05:52.211 CXX test/cpp_headers/stdinc.o 00:05:52.211 CXX test/cpp_headers/string.o 00:05:52.211 CXX test/cpp_headers/thread.o 00:05:52.211 CC test/event/app_repeat/app_repeat.o 00:05:52.211 CXX test/cpp_headers/trace.o 00:05:52.211 CXX test/cpp_headers/trace_parser.o 00:05:52.211 CXX test/cpp_headers/tree.o 00:05:52.475 CXX test/cpp_headers/ublk.o 00:05:52.475 CXX test/cpp_headers/util.o 00:05:52.475 CXX test/cpp_headers/uuid.o 00:05:52.475 CXX test/cpp_headers/version.o 00:05:52.475 CXX test/cpp_headers/vfio_user_pci.o 00:05:52.475 LINK vhost_fuzz 00:05:52.475 CXX test/cpp_headers/vfio_user_spec.o 00:05:52.475 LINK lsvmd 00:05:52.475 CXX test/cpp_headers/vhost.o 00:05:52.475 CXX test/cpp_headers/vmd.o 00:05:52.475 CC test/event/scheduler/scheduler.o 00:05:52.475 CXX test/cpp_headers/xor.o 00:05:52.475 CXX test/cpp_headers/zipf.o 00:05:52.475 LINK mem_callbacks 00:05:52.475 LINK led 00:05:52.475 LINK event_perf 00:05:52.475 LINK spdk_nvme_perf 00:05:52.475 CC app/vhost/vhost.o 00:05:52.475 LINK reactor 00:05:52.475 LINK spdk_nvme_identify 00:05:52.475 LINK reactor_perf 00:05:52.475 LINK thread 00:05:52.475 LINK spdk_top 00:05:52.475 LINK hello_sock 00:05:52.732 LINK app_repeat 00:05:52.732 LINK idxd_perf 00:05:52.732 LINK vhost 00:05:52.732 CC test/nvme/reset/reset.o 00:05:52.732 CC test/nvme/e2edp/nvme_dp.o 00:05:52.732 CC test/nvme/startup/startup.o 00:05:52.732 CC test/nvme/aer/aer.o 00:05:52.732 CC test/nvme/simple_copy/simple_copy.o 00:05:52.732 CC test/nvme/sgl/sgl.o 00:05:52.732 CC test/nvme/fused_ordering/fused_ordering.o 00:05:52.732 CC test/nvme/cuse/cuse.o 00:05:52.732 CC test/nvme/connect_stress/connect_stress.o 00:05:52.732 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:52.732 CC test/nvme/compliance/nvme_compliance.o 00:05:52.733 CC test/nvme/fdp/fdp.o 00:05:52.733 CC test/nvme/overhead/overhead.o 00:05:52.733 CC test/nvme/reserve/reserve.o 00:05:52.733 CC test/nvme/err_injection/err_injection.o 
00:05:52.733 LINK scheduler 00:05:52.733 CC test/nvme/boot_partition/boot_partition.o 00:05:52.733 CC test/accel/dif/dif.o 00:05:52.733 CC test/blobfs/mkfs/mkfs.o 00:05:52.989 CC test/lvol/esnap/esnap.o 00:05:52.989 LINK boot_partition 00:05:52.989 LINK startup 00:05:52.989 CC examples/nvme/hello_world/hello_world.o 00:05:52.989 CC examples/nvme/reconnect/reconnect.o 00:05:52.989 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:52.989 LINK connect_stress 00:05:52.989 CC examples/nvme/abort/abort.o 00:05:52.989 CC examples/nvme/arbitration/arbitration.o 00:05:52.989 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:52.989 CC examples/nvme/hotplug/hotplug.o 00:05:52.989 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:52.989 LINK err_injection 00:05:52.989 LINK fused_ordering 00:05:52.989 CC examples/accel/perf/accel_perf.o 00:05:52.989 LINK doorbell_aers 00:05:52.989 LINK reserve 00:05:53.247 CC examples/blob/cli/blobcli.o 00:05:53.247 CC examples/blob/hello_world/hello_blob.o 00:05:53.247 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:53.247 LINK mkfs 00:05:53.247 LINK nvme_dp 00:05:53.247 LINK reset 00:05:53.247 LINK sgl 00:05:53.247 LINK simple_copy 00:05:53.247 LINK memory_ut 00:05:53.247 LINK cmb_copy 00:05:53.247 LINK pmr_persistence 00:05:53.247 LINK aer 00:05:53.247 LINK overhead 00:05:53.504 LINK hotplug 00:05:53.504 LINK nvme_compliance 00:05:53.504 LINK fdp 00:05:53.504 LINK hello_world 00:05:53.504 LINK arbitration 00:05:53.504 LINK hello_blob 00:05:53.504 LINK hello_fsdev 00:05:53.504 LINK abort 00:05:53.504 LINK reconnect 00:05:53.761 LINK dif 00:05:53.761 LINK blobcli 00:05:53.761 LINK nvme_manage 00:05:53.761 LINK accel_perf 00:05:54.019 CC test/bdev/bdevio/bdevio.o 00:05:54.019 CC examples/bdev/hello_world/hello_bdev.o 00:05:54.019 CC examples/bdev/bdevperf/bdevperf.o 00:05:54.277 LINK iscsi_fuzz 00:05:54.277 LINK hello_bdev 00:05:54.551 LINK cuse 00:05:54.551 LINK bdevio 00:05:54.808 LINK bdevperf 00:05:55.374 CC examples/nvmf/nvmf/nvmf.o 
00:05:55.632 LINK nvmf 00:05:58.189 LINK esnap 00:05:58.755 00:05:58.755 real 1m10.056s 00:05:58.755 user 11m52.980s 00:05:58.755 sys 2m38.319s 00:05:58.755 10:16:30 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:58.755 10:16:30 make -- common/autotest_common.sh@10 -- $ set +x 00:05:58.755 ************************************ 00:05:58.755 END TEST make 00:05:58.755 ************************************ 00:05:58.755 10:16:30 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:58.755 10:16:30 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:58.755 10:16:30 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:58.755 10:16:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.755 10:16:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:58.755 10:16:30 -- pm/common@44 -- $ pid=2343226 00:05:58.755 10:16:30 -- pm/common@50 -- $ kill -TERM 2343226 00:05:58.755 10:16:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.755 10:16:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:58.755 10:16:30 -- pm/common@44 -- $ pid=2343228 00:05:58.755 10:16:30 -- pm/common@50 -- $ kill -TERM 2343228 00:05:58.755 10:16:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.755 10:16:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:58.755 10:16:30 -- pm/common@44 -- $ pid=2343230 00:05:58.755 10:16:30 -- pm/common@50 -- $ kill -TERM 2343230 00:05:58.755 10:16:30 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.756 10:16:30 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:58.756 10:16:30 -- pm/common@44 -- $ pid=2343261 00:05:58.756 10:16:30 -- pm/common@50 -- $ sudo -E kill 
-TERM 2343261 00:05:58.756 10:16:30 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:58.756 10:16:30 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:58.756 10:16:31 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:58.756 10:16:31 -- common/autotest_common.sh@1711 -- # lcov --version 00:05:58.756 10:16:31 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:58.756 10:16:31 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:58.756 10:16:31 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.756 10:16:31 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.756 10:16:31 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.756 10:16:31 -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.756 10:16:31 -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.756 10:16:31 -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.756 10:16:31 -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.756 10:16:31 -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.756 10:16:31 -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.756 10:16:31 -- scripts/common.sh@341 -- # ver2_l=1 00:05:58.756 10:16:31 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.756 10:16:31 -- scripts/common.sh@344 -- # case "$op" in 00:05:58.756 10:16:31 -- scripts/common.sh@345 -- # : 1 00:05:58.756 10:16:31 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.756 10:16:31 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:58.756 10:16:31 -- scripts/common.sh@365 -- # decimal 1 00:05:58.756 10:16:31 -- scripts/common.sh@353 -- # local d=1 00:05:58.756 10:16:31 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.756 10:16:31 -- scripts/common.sh@355 -- # echo 1 00:05:58.756 10:16:31 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.756 10:16:31 -- scripts/common.sh@366 -- # decimal 2 00:05:58.756 10:16:31 -- scripts/common.sh@353 -- # local d=2 00:05:58.756 10:16:31 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.756 10:16:31 -- scripts/common.sh@355 -- # echo 2 00:05:58.756 10:16:31 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.756 10:16:31 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.756 10:16:31 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.756 10:16:31 -- scripts/common.sh@368 -- # return 0 00:05:58.756 10:16:31 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.756 10:16:31 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 10:16:31 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 10:16:31 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc 
genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 10:16:31 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:58.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.756 --rc genhtml_branch_coverage=1 00:05:58.756 --rc genhtml_function_coverage=1 00:05:58.756 --rc genhtml_legend=1 00:05:58.756 --rc geninfo_all_blocks=1 00:05:58.756 --rc geninfo_unexecuted_blocks=1 00:05:58.756 00:05:58.756 ' 00:05:58.756 10:16:31 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.756 10:16:31 -- nvmf/common.sh@7 -- # uname -s 00:05:58.756 10:16:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.756 10:16:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.756 10:16:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.756 10:16:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.756 10:16:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:58.756 10:16:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:58.756 10:16:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.756 10:16:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:58.756 10:16:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.756 10:16:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:58.756 10:16:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:58.756 10:16:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:58.756 10:16:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.756 10:16:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:58.756 10:16:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:58.756 10:16:31 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.756 10:16:31 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.756 10:16:31 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.756 10:16:31 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.756 10:16:31 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.756 10:16:31 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.756 10:16:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.756 10:16:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.756 10:16:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.756 10:16:31 -- paths/export.sh@5 -- # export PATH 00:05:58.756 10:16:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.756 10:16:31 -- nvmf/common.sh@51 -- # : 0 00:05:58.756 10:16:31 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:58.756 10:16:31 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:58.756 10:16:31 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:58.756 10:16:31 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.756 10:16:31 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.756 10:16:31 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:58.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:58.756 10:16:31 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:58.756 10:16:31 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:58.756 10:16:31 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:58.756 10:16:31 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:58.756 10:16:31 -- spdk/autotest.sh@32 -- # uname -s 00:05:58.756 10:16:31 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:58.756 10:16:31 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:58.756 10:16:31 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:58.756 10:16:31 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:58.756 10:16:31 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:58.756 10:16:31 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:58.756 10:16:31 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:58.756 10:16:31 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:58.756 10:16:31 -- spdk/autotest.sh@48 -- # udevadm_pid=2403249 00:05:58.756 10:16:31 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:58.756 10:16:31 -- pm/common@17 -- # local monitor 00:05:58.756 10:16:31 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:58.756 10:16:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.756 10:16:31 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:58.756 10:16:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.756 10:16:31 -- pm/common@21 -- # date +%s 00:05:58.756 10:16:31 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:58.756 10:16:31 -- pm/common@21 -- # date +%s 00:05:58.756 10:16:31 -- pm/common@25 -- # sleep 1 00:05:58.756 10:16:31 -- pm/common@21 -- # date +%s 00:05:58.756 10:16:31 -- pm/common@21 -- # date +%s 00:05:58.756 10:16:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735791 00:05:58.756 10:16:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735791 00:05:58.756 10:16:31 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735791 00:05:58.756 10:16:31 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733735791 00:05:59.015 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735791_collect-vmstat.pm.log 00:05:59.015 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735791_collect-cpu-load.pm.log 00:05:59.015 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735791_collect-cpu-temp.pm.log 00:05:59.015 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733735791_collect-bmc-pm.bmc.pm.log 00:05:59.954 
10:16:32 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:59.954 10:16:32 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:59.954 10:16:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:59.954 10:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:59.954 10:16:32 -- spdk/autotest.sh@59 -- # create_test_list 00:05:59.954 10:16:32 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:59.954 10:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:59.954 10:16:32 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:59.954 10:16:32 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.954 10:16:32 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.954 10:16:32 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:59.954 10:16:32 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.954 10:16:32 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:59.954 10:16:32 -- common/autotest_common.sh@1457 -- # uname 00:05:59.954 10:16:32 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:59.954 10:16:32 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:59.954 10:16:32 -- common/autotest_common.sh@1477 -- # uname 00:05:59.954 10:16:32 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:59.954 10:16:32 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:59.954 10:16:32 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:59.954 lcov: LCOV version 1.15 00:05:59.954 10:16:32 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:32.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:32.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:37.294 10:17:09 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:37.294 10:17:09 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.294 10:17:09 -- common/autotest_common.sh@10 -- # set +x 00:06:37.294 10:17:09 -- spdk/autotest.sh@78 -- # rm -f 00:06:37.294 10:17:09 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:38.667 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:38.667 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:38.667 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:38.667 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:38.667 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:38.667 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:38.667 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:38.667 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:38.667 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:06:38.926 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:38.926 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:38.926 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:38.926 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:38.926 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:38.926 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:38.926 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:38.926 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:38.926 10:17:11 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:38.926 10:17:11 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:38.926 10:17:11 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:38.926 10:17:11 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:38.926 10:17:11 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:38.926 10:17:11 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:38.926 10:17:11 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:38.926 10:17:11 -- common/autotest_common.sh@1669 -- # bdf=0000:0b:00.0 00:06:38.926 10:17:11 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:38.926 10:17:11 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:38.926 10:17:11 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:38.926 10:17:11 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:38.926 10:17:11 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:38.926 10:17:11 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:38.926 10:17:11 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:38.926 10:17:11 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:38.926 10:17:11 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:38.927 10:17:11 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:38.927 10:17:11 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:38.927 No valid GPT data, bailing 00:06:38.927 10:17:11 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:39.184 10:17:11 -- scripts/common.sh@394 -- # pt= 00:06:39.184 10:17:11 -- scripts/common.sh@395 -- 
# return 1 00:06:39.184 10:17:11 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:39.184 1+0 records in 00:06:39.184 1+0 records out 00:06:39.184 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00216979 s, 483 MB/s 00:06:39.184 10:17:11 -- spdk/autotest.sh@105 -- # sync 00:06:39.184 10:17:11 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:39.184 10:17:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:39.184 10:17:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:41.086 10:17:13 -- spdk/autotest.sh@111 -- # uname -s 00:06:41.086 10:17:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:41.086 10:17:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:41.086 10:17:13 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:42.462 Hugepages 00:06:42.462 node hugesize free / total 00:06:42.462 node0 1048576kB 0 / 0 00:06:42.462 node0 2048kB 0 / 0 00:06:42.462 node1 1048576kB 0 / 0 00:06:42.462 node1 2048kB 0 / 0 00:06:42.462 00:06:42.462 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:42.462 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:42.462 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:42.462 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:06:42.462 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.5 8086 
0e25 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:42.462 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:42.462 10:17:14 -- spdk/autotest.sh@117 -- # uname -s 00:06:42.462 10:17:14 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:42.462 10:17:14 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:42.462 10:17:14 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:43.883 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:43.883 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:43.883 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:44.874 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:44.874 10:17:17 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:46.246 10:17:18 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:46.246 10:17:18 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:46.246 10:17:18 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:46.246 10:17:18 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:46.246 10:17:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:46.246 10:17:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:46.246 10:17:18 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:46.246 10:17:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:46.246 10:17:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:46.246 10:17:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:46.247 10:17:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:06:46.247 10:17:18 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:47.182 Waiting for block devices as requested 00:06:47.182 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:47.182 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:47.442 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:47.442 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:47.442 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:47.442 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:47.701 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:47.701 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:47.701 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:06:47.959 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:47.959 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:47.959 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:48.218 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:48.218 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:48.218 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:48.479 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:48.479 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:48.479 10:17:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:48.479 10:17:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:48.479 10:17:20 -- 
common/autotest_common.sh@1487 -- # grep 0000:0b:00.0/nvme/nvme 00:06:48.479 10:17:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:06:48.479 10:17:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:48.479 10:17:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:48.479 10:17:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:48.479 10:17:20 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:48.479 10:17:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:48.479 10:17:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:48.479 10:17:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:48.479 10:17:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:48.479 10:17:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:48.479 10:17:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:48.479 10:17:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:48.479 10:17:20 -- common/autotest_common.sh@1543 -- # continue 00:06:48.479 10:17:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:48.479 10:17:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.479 10:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 10:17:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:48.479 10:17:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.479 
10:17:20 -- common/autotest_common.sh@10 -- # set +x 00:06:48.479 10:17:20 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:49.855 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:49.855 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:49.855 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:50.115 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:51.054 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:06:51.054 10:17:23 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:51.054 10:17:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:51.054 10:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:51.054 10:17:23 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:51.054 10:17:23 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:51.054 10:17:23 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:51.054 10:17:23 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:51.054 10:17:23 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:51.054 10:17:23 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:51.054 10:17:23 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:51.054 10:17:23 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:06:51.054 10:17:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:51.054 10:17:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:51.054 10:17:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:51.054 10:17:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:51.054 10:17:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:51.054 10:17:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:51.054 10:17:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:06:51.054 10:17:23 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:51.054 10:17:23 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:06:51.054 10:17:23 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:51.054 10:17:23 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:51.054 10:17:23 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:51.054 10:17:23 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:51.054 10:17:23 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:0b:00.0 00:06:51.054 10:17:23 -- common/autotest_common.sh@1579 -- # [[ -z 0000:0b:00.0 ]] 00:06:51.054 10:17:23 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2413750 00:06:51.054 10:17:23 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:51.054 10:17:23 -- common/autotest_common.sh@1585 -- # waitforlisten 2413750 00:06:51.054 10:17:23 -- common/autotest_common.sh@835 -- # '[' -z 2413750 ']' 00:06:51.054 10:17:23 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.054 10:17:23 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.054 10:17:23 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.054 10:17:23 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.054 10:17:23 -- common/autotest_common.sh@10 -- # set +x 00:06:51.311 [2024-12-09 10:17:23.519923] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:06:51.311 [2024-12-09 10:17:23.520010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413750 ] 00:06:51.311 [2024-12-09 10:17:23.585096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.311 [2024-12-09 10:17:23.641035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.570 10:17:23 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.570 10:17:23 -- common/autotest_common.sh@868 -- # return 0 00:06:51.570 10:17:23 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:51.570 10:17:23 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:51.570 10:17:23 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:06:54.849 nvme0n1 00:06:54.849 10:17:27 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:54.849 [2024-12-09 10:17:27.274107] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:54.849 [2024-12-09 10:17:27.274172] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:54.849 request: 00:06:54.849 { 00:06:54.849 "nvme_ctrlr_name": "nvme0", 00:06:54.849 "password": "test", 00:06:54.849 "method": 
"bdev_nvme_opal_revert", 00:06:54.849 "req_id": 1 00:06:54.849 } 00:06:54.849 Got JSON-RPC error response 00:06:54.849 response: 00:06:54.849 { 00:06:54.849 "code": -32603, 00:06:54.849 "message": "Internal error" 00:06:54.849 } 00:06:55.107 10:17:27 -- common/autotest_common.sh@1591 -- # true 00:06:55.107 10:17:27 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:55.107 10:17:27 -- common/autotest_common.sh@1595 -- # killprocess 2413750 00:06:55.107 10:17:27 -- common/autotest_common.sh@954 -- # '[' -z 2413750 ']' 00:06:55.107 10:17:27 -- common/autotest_common.sh@958 -- # kill -0 2413750 00:06:55.107 10:17:27 -- common/autotest_common.sh@959 -- # uname 00:06:55.107 10:17:27 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.107 10:17:27 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413750 00:06:55.107 10:17:27 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.107 10:17:27 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.107 10:17:27 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413750' 00:06:55.107 killing process with pid 2413750 00:06:55.107 10:17:27 -- common/autotest_common.sh@973 -- # kill 2413750 00:06:55.107 10:17:27 -- common/autotest_common.sh@978 -- # wait 2413750 00:06:57.007 10:17:29 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:57.007 10:17:29 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:57.007 10:17:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:57.007 10:17:29 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:57.007 10:17:29 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:57.007 10:17:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:57.007 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:57.007 10:17:29 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:57.007 10:17:29 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:57.007 10:17:29 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.007 10:17:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.007 10:17:29 -- common/autotest_common.sh@10 -- # set +x 00:06:57.007 ************************************ 00:06:57.007 START TEST env 00:06:57.007 ************************************ 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:57.007 * Looking for test storage... 00:06:57.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1711 -- # lcov --version 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:57.007 10:17:29 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.007 10:17:29 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.007 10:17:29 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.007 10:17:29 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.007 10:17:29 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.007 10:17:29 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.007 10:17:29 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.007 10:17:29 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.007 10:17:29 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.007 10:17:29 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.007 10:17:29 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:57.007 10:17:29 env -- scripts/common.sh@344 -- # case "$op" in 00:06:57.007 10:17:29 env -- scripts/common.sh@345 -- # : 1 00:06:57.007 10:17:29 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.007 10:17:29 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:57.007 10:17:29 env -- scripts/common.sh@365 -- # decimal 1 00:06:57.007 10:17:29 env -- scripts/common.sh@353 -- # local d=1 00:06:57.007 10:17:29 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.007 10:17:29 env -- scripts/common.sh@355 -- # echo 1 00:06:57.007 10:17:29 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.007 10:17:29 env -- scripts/common.sh@366 -- # decimal 2 00:06:57.007 10:17:29 env -- scripts/common.sh@353 -- # local d=2 00:06:57.007 10:17:29 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.007 10:17:29 env -- scripts/common.sh@355 -- # echo 2 00:06:57.007 10:17:29 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.007 10:17:29 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.007 10:17:29 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.007 10:17:29 env -- scripts/common.sh@368 -- # return 0 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:57.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.007 --rc genhtml_branch_coverage=1 00:06:57.007 --rc genhtml_function_coverage=1 00:06:57.007 --rc genhtml_legend=1 00:06:57.007 --rc geninfo_all_blocks=1 00:06:57.007 --rc geninfo_unexecuted_blocks=1 00:06:57.007 00:06:57.007 ' 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:57.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.007 --rc genhtml_branch_coverage=1 00:06:57.007 --rc genhtml_function_coverage=1 00:06:57.007 --rc genhtml_legend=1 00:06:57.007 --rc geninfo_all_blocks=1 00:06:57.007 --rc geninfo_unexecuted_blocks=1 00:06:57.007 00:06:57.007 ' 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:57.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:57.007 --rc genhtml_branch_coverage=1 00:06:57.007 --rc genhtml_function_coverage=1 00:06:57.007 --rc genhtml_legend=1 00:06:57.007 --rc geninfo_all_blocks=1 00:06:57.007 --rc geninfo_unexecuted_blocks=1 00:06:57.007 00:06:57.007 ' 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:57.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.007 --rc genhtml_branch_coverage=1 00:06:57.007 --rc genhtml_function_coverage=1 00:06:57.007 --rc genhtml_legend=1 00:06:57.007 --rc geninfo_all_blocks=1 00:06:57.007 --rc geninfo_unexecuted_blocks=1 00:06:57.007 00:06:57.007 ' 00:06:57.007 10:17:29 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.007 10:17:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.007 10:17:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.007 ************************************ 00:06:57.007 START TEST env_memory 00:06:57.007 ************************************ 00:06:57.007 10:17:29 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:57.007 00:06:57.007 00:06:57.007 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.007 http://cunit.sourceforge.net/ 00:06:57.007 00:06:57.007 00:06:57.007 Suite: memory 00:06:57.007 Test: alloc and free memory map ...[2024-12-09 10:17:29.334122] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:57.007 passed 00:06:57.007 Test: mem map translation ...[2024-12-09 10:17:29.354112] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:57.007 [2024-12-09 
10:17:29.354134] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:57.008 [2024-12-09 10:17:29.354194] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:57.008 [2024-12-09 10:17:29.354207] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:57.008 passed 00:06:57.008 Test: mem map registration ...[2024-12-09 10:17:29.395236] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:57.008 [2024-12-09 10:17:29.395255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:57.008 passed 00:06:57.267 Test: mem map adjacent registrations ...passed 00:06:57.267 00:06:57.267 Run Summary: Type Total Ran Passed Failed Inactive 00:06:57.267 suites 1 1 n/a 0 0 00:06:57.267 tests 4 4 4 0 0 00:06:57.267 asserts 152 152 152 0 n/a 00:06:57.267 00:06:57.267 Elapsed time = 0.143 seconds 00:06:57.267 00:06:57.267 real 0m0.151s 00:06:57.267 user 0m0.146s 00:06:57.267 sys 0m0.005s 00:06:57.267 10:17:29 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.267 10:17:29 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:57.267 ************************************ 00:06:57.267 END TEST env_memory 00:06:57.267 ************************************ 00:06:57.267 10:17:29 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:57.267 10:17:29 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:57.267 10:17:29 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.267 10:17:29 env -- common/autotest_common.sh@10 -- # set +x 00:06:57.267 ************************************ 00:06:57.267 START TEST env_vtophys 00:06:57.267 ************************************ 00:06:57.267 10:17:29 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:57.267 EAL: lib.eal log level changed from notice to debug 00:06:57.267 EAL: Detected lcore 0 as core 0 on socket 0 00:06:57.267 EAL: Detected lcore 1 as core 1 on socket 0 00:06:57.267 EAL: Detected lcore 2 as core 2 on socket 0 00:06:57.267 EAL: Detected lcore 3 as core 3 on socket 0 00:06:57.267 EAL: Detected lcore 4 as core 4 on socket 0 00:06:57.267 EAL: Detected lcore 5 as core 5 on socket 0 00:06:57.267 EAL: Detected lcore 6 as core 8 on socket 0 00:06:57.267 EAL: Detected lcore 7 as core 9 on socket 0 00:06:57.267 EAL: Detected lcore 8 as core 10 on socket 0 00:06:57.267 EAL: Detected lcore 9 as core 11 on socket 0 00:06:57.267 EAL: Detected lcore 10 as core 12 on socket 0 00:06:57.267 EAL: Detected lcore 11 as core 13 on socket 0 00:06:57.267 EAL: Detected lcore 12 as core 0 on socket 1 00:06:57.267 EAL: Detected lcore 13 as core 1 on socket 1 00:06:57.267 EAL: Detected lcore 14 as core 2 on socket 1 00:06:57.267 EAL: Detected lcore 15 as core 3 on socket 1 00:06:57.267 EAL: Detected lcore 16 as core 4 on socket 1 00:06:57.267 EAL: Detected lcore 17 as core 5 on socket 1 00:06:57.267 EAL: Detected lcore 18 as core 8 on socket 1 00:06:57.267 EAL: Detected lcore 19 as core 9 on socket 1 00:06:57.267 EAL: Detected lcore 20 as core 10 on socket 1 00:06:57.267 EAL: Detected lcore 21 as core 11 on socket 1 00:06:57.267 EAL: Detected lcore 22 as core 12 on socket 1 00:06:57.267 EAL: Detected lcore 23 as core 13 on socket 1 00:06:57.267 EAL: Detected lcore 24 as core 0 on socket 0 00:06:57.267 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:57.267 EAL: Detected lcore 26 as core 2 on socket 0 00:06:57.267 EAL: Detected lcore 27 as core 3 on socket 0 00:06:57.267 EAL: Detected lcore 28 as core 4 on socket 0 00:06:57.267 EAL: Detected lcore 29 as core 5 on socket 0 00:06:57.267 EAL: Detected lcore 30 as core 8 on socket 0 00:06:57.267 EAL: Detected lcore 31 as core 9 on socket 0 00:06:57.267 EAL: Detected lcore 32 as core 10 on socket 0 00:06:57.267 EAL: Detected lcore 33 as core 11 on socket 0 00:06:57.267 EAL: Detected lcore 34 as core 12 on socket 0 00:06:57.267 EAL: Detected lcore 35 as core 13 on socket 0 00:06:57.267 EAL: Detected lcore 36 as core 0 on socket 1 00:06:57.267 EAL: Detected lcore 37 as core 1 on socket 1 00:06:57.267 EAL: Detected lcore 38 as core 2 on socket 1 00:06:57.267 EAL: Detected lcore 39 as core 3 on socket 1 00:06:57.267 EAL: Detected lcore 40 as core 4 on socket 1 00:06:57.267 EAL: Detected lcore 41 as core 5 on socket 1 00:06:57.267 EAL: Detected lcore 42 as core 8 on socket 1 00:06:57.267 EAL: Detected lcore 43 as core 9 on socket 1 00:06:57.267 EAL: Detected lcore 44 as core 10 on socket 1 00:06:57.267 EAL: Detected lcore 45 as core 11 on socket 1 00:06:57.267 EAL: Detected lcore 46 as core 12 on socket 1 00:06:57.267 EAL: Detected lcore 47 as core 13 on socket 1 00:06:57.267 EAL: Maximum logical cores by configuration: 128 00:06:57.267 EAL: Detected CPU lcores: 48 00:06:57.267 EAL: Detected NUMA nodes: 2 00:06:57.267 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:57.267 EAL: Detected shared linkage of DPDK 00:06:57.267 EAL: No shared files mode enabled, IPC will be disabled 00:06:57.267 EAL: Bus pci wants IOVA as 'DC' 00:06:57.267 EAL: Buses did not request a specific IOVA mode. 00:06:57.267 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:57.267 EAL: Selected IOVA mode 'VA' 00:06:57.267 EAL: Probing VFIO support... 
00:06:57.267 EAL: IOMMU type 1 (Type 1) is supported 00:06:57.267 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:57.267 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:57.267 EAL: VFIO support initialized 00:06:57.267 EAL: Ask a virtual area of 0x2e000 bytes 00:06:57.267 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:57.267 EAL: Setting up physically contiguous memory... 00:06:57.267 EAL: Setting maximum number of open files to 524288 00:06:57.267 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:57.267 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:57.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.267 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:57.267 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.267 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:57.267 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.267 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:57.267 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:57.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.267 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:57.267 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:57.267 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.267 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:57.267 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.267 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.267 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:57.267 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:57.267 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.267 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:57.267 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.268 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.268 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:57.268 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:57.268 EAL: Ask a virtual area of 0x61000 bytes 00:06:57.268 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:57.268 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:57.268 EAL: Ask a virtual area of 0x400000000 bytes 00:06:57.268 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:06:57.268 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:57.268 EAL: Hugepages will be freed exactly as allocated. 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: TSC frequency is ~2700000 KHz 00:06:57.268 EAL: Main lcore 0 is ready (tid=7f420c996a00;cpuset=[0]) 00:06:57.268 EAL: Trying to obtain current memory policy. 00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 0 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 2MB 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:57.268 EAL: Mem event callback 'spdk:(nil)' registered 00:06:57.268 00:06:57.268 00:06:57.268 CUnit - A unit testing framework for C - Version 2.1-3 00:06:57.268 http://cunit.sourceforge.net/ 00:06:57.268 00:06:57.268 00:06:57.268 Suite: components_suite 00:06:57.268 Test: vtophys_malloc_test ...passed 00:06:57.268 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 4MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was shrunk by 4MB 00:06:57.268 EAL: Trying to obtain current memory policy. 
00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 6MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was shrunk by 6MB 00:06:57.268 EAL: Trying to obtain current memory policy. 00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 10MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was shrunk by 10MB 00:06:57.268 EAL: Trying to obtain current memory policy. 00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 18MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was shrunk by 18MB 00:06:57.268 EAL: Trying to obtain current memory policy. 
00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 34MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was shrunk by 34MB 00:06:57.268 EAL: Trying to obtain current memory policy. 00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 66MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was shrunk by 66MB 00:06:57.268 EAL: Trying to obtain current memory policy. 00:06:57.268 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.268 EAL: Restoring previous memory policy: 4 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.268 EAL: request: mp_malloc_sync 00:06:57.268 EAL: No shared files mode enabled, IPC is disabled 00:06:57.268 EAL: Heap on socket 0 was expanded by 130MB 00:06:57.268 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.526 EAL: request: mp_malloc_sync 00:06:57.526 EAL: No shared files mode enabled, IPC is disabled 00:06:57.526 EAL: Heap on socket 0 was shrunk by 130MB 00:06:57.526 EAL: Trying to obtain current memory policy. 
00:06:57.526 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.526 EAL: Restoring previous memory policy: 4 00:06:57.526 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.526 EAL: request: mp_malloc_sync 00:06:57.526 EAL: No shared files mode enabled, IPC is disabled 00:06:57.526 EAL: Heap on socket 0 was expanded by 258MB 00:06:57.526 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.526 EAL: request: mp_malloc_sync 00:06:57.526 EAL: No shared files mode enabled, IPC is disabled 00:06:57.526 EAL: Heap on socket 0 was shrunk by 258MB 00:06:57.526 EAL: Trying to obtain current memory policy. 00:06:57.526 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:57.783 EAL: Restoring previous memory policy: 4 00:06:57.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:57.783 EAL: request: mp_malloc_sync 00:06:57.783 EAL: No shared files mode enabled, IPC is disabled 00:06:57.783 EAL: Heap on socket 0 was expanded by 514MB 00:06:57.783 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.041 EAL: request: mp_malloc_sync 00:06:58.041 EAL: No shared files mode enabled, IPC is disabled 00:06:58.041 EAL: Heap on socket 0 was shrunk by 514MB 00:06:58.041 EAL: Trying to obtain current memory policy. 
00:06:58.041 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:58.300 EAL: Restoring previous memory policy: 4 00:06:58.300 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.300 EAL: request: mp_malloc_sync 00:06:58.300 EAL: No shared files mode enabled, IPC is disabled 00:06:58.300 EAL: Heap on socket 0 was expanded by 1026MB 00:06:58.559 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.559 EAL: request: mp_malloc_sync 00:06:58.559 EAL: No shared files mode enabled, IPC is disabled 00:06:58.559 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:58.559 passed 00:06:58.559 00:06:58.559 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.559 suites 1 1 n/a 0 0 00:06:58.559 tests 2 2 2 0 0 00:06:58.559 asserts 497 497 497 0 n/a 00:06:58.559 00:06:58.559 Elapsed time = 1.361 seconds 00:06:58.559 EAL: Calling mem event callback 'spdk:(nil)' 00:06:58.559 EAL: request: mp_malloc_sync 00:06:58.559 EAL: No shared files mode enabled, IPC is disabled 00:06:58.559 EAL: Heap on socket 0 was shrunk by 2MB 00:06:58.559 EAL: No shared files mode enabled, IPC is disabled 00:06:58.559 EAL: No shared files mode enabled, IPC is disabled 00:06:58.559 EAL: No shared files mode enabled, IPC is disabled 00:06:58.559 00:06:58.559 real 0m1.486s 00:06:58.559 user 0m0.856s 00:06:58.559 sys 0m0.591s 00:06:58.559 10:17:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.559 10:17:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:58.559 ************************************ 00:06:58.559 END TEST env_vtophys 00:06:58.559 ************************************ 00:06:58.822 10:17:31 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:58.822 10:17:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.822 10:17:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.822 10:17:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.822 
************************************ 00:06:58.822 START TEST env_pci 00:06:58.822 ************************************ 00:06:58.822 10:17:31 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:58.822 00:06:58.822 00:06:58.822 CUnit - A unit testing framework for C - Version 2.1-3 00:06:58.822 http://cunit.sourceforge.net/ 00:06:58.822 00:06:58.822 00:06:58.822 Suite: pci 00:06:58.822 Test: pci_hook ...[2024-12-09 10:17:31.052307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2414651 has claimed it 00:06:58.822 EAL: Cannot find device (10000:00:01.0) 00:06:58.822 EAL: Failed to attach device on primary process 00:06:58.822 passed 00:06:58.822 00:06:58.822 Run Summary: Type Total Ran Passed Failed Inactive 00:06:58.822 suites 1 1 n/a 0 0 00:06:58.822 tests 1 1 1 0 0 00:06:58.822 asserts 25 25 25 0 n/a 00:06:58.822 00:06:58.822 Elapsed time = 0.021 seconds 00:06:58.822 00:06:58.822 real 0m0.035s 00:06:58.822 user 0m0.010s 00:06:58.822 sys 0m0.024s 00:06:58.822 10:17:31 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.822 10:17:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:58.822 ************************************ 00:06:58.822 END TEST env_pci 00:06:58.822 ************************************ 00:06:58.822 10:17:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:58.822 10:17:31 env -- env/env.sh@15 -- # uname 00:06:58.822 10:17:31 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:58.822 10:17:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:58.822 10:17:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:58.822 10:17:31 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:58.822 10:17:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.822 10:17:31 env -- common/autotest_common.sh@10 -- # set +x 00:06:58.822 ************************************ 00:06:58.822 START TEST env_dpdk_post_init 00:06:58.822 ************************************ 00:06:58.822 10:17:31 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:58.822 EAL: Detected CPU lcores: 48 00:06:58.822 EAL: Detected NUMA nodes: 2 00:06:58.822 EAL: Detected shared linkage of DPDK 00:06:58.822 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:58.822 EAL: Selected IOVA mode 'VA' 00:06:58.822 EAL: VFIO support initialized 00:06:58.822 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:58.822 EAL: Using IOMMU type 1 (Type 1) 00:06:58.822 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:58.822 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:59.082 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:59.082 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:59.082 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:59.082 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:59.082 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:59.082 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:59.652 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:06:59.652 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:59.911 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:59.911 EAL: Probe PCI driver: 
spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:59.911 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:59.911 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:59.911 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:59.911 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:59.911 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:03.188 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:07:03.188 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:07:03.188 Starting DPDK initialization... 00:07:03.188 Starting SPDK post initialization... 00:07:03.188 SPDK NVMe probe 00:07:03.188 Attaching to 0000:0b:00.0 00:07:03.188 Attached to 0000:0b:00.0 00:07:03.188 Cleaning up... 00:07:03.188 00:07:03.188 real 0m4.352s 00:07:03.188 user 0m2.982s 00:07:03.188 sys 0m0.424s 00:07:03.188 10:17:35 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.188 10:17:35 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 ************************************ 00:07:03.188 END TEST env_dpdk_post_init 00:07:03.188 ************************************ 00:07:03.188 10:17:35 env -- env/env.sh@26 -- # uname 00:07:03.188 10:17:35 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:03.188 10:17:35 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:03.188 10:17:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.188 10:17:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.188 10:17:35 env -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 ************************************ 00:07:03.188 START TEST env_mem_callbacks 00:07:03.188 ************************************ 00:07:03.188 10:17:35 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:03.188 EAL: Detected CPU lcores: 48 00:07:03.188 EAL: Detected NUMA nodes: 2 00:07:03.188 EAL: Detected shared linkage of DPDK 00:07:03.188 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:03.188 EAL: Selected IOVA mode 'VA' 00:07:03.188 EAL: VFIO support initialized 00:07:03.188 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:03.188 00:07:03.188 00:07:03.188 CUnit - A unit testing framework for C - Version 2.1-3 00:07:03.188 http://cunit.sourceforge.net/ 00:07:03.188 00:07:03.188 00:07:03.188 Suite: memory 00:07:03.188 Test: test ... 00:07:03.188 register 0x200000200000 2097152 00:07:03.188 malloc 3145728 00:07:03.188 register 0x200000400000 4194304 00:07:03.188 buf 0x200000500000 len 3145728 PASSED 00:07:03.188 malloc 64 00:07:03.188 buf 0x2000004fff40 len 64 PASSED 00:07:03.188 malloc 4194304 00:07:03.188 register 0x200000800000 6291456 00:07:03.188 buf 0x200000a00000 len 4194304 PASSED 00:07:03.188 free 0x200000500000 3145728 00:07:03.188 free 0x2000004fff40 64 00:07:03.188 unregister 0x200000400000 4194304 PASSED 00:07:03.188 free 0x200000a00000 4194304 00:07:03.188 unregister 0x200000800000 6291456 PASSED 00:07:03.188 malloc 8388608 00:07:03.188 register 0x200000400000 10485760 00:07:03.188 buf 0x200000600000 len 8388608 PASSED 00:07:03.188 free 0x200000600000 8388608 00:07:03.188 unregister 0x200000400000 10485760 PASSED 00:07:03.188 passed 00:07:03.188 00:07:03.188 Run Summary: Type Total Ran Passed Failed Inactive 00:07:03.188 suites 1 1 n/a 0 0 00:07:03.188 tests 1 1 1 0 0 00:07:03.188 asserts 15 15 15 0 n/a 00:07:03.188 00:07:03.188 Elapsed time = 0.004 seconds 00:07:03.188 00:07:03.188 real 0m0.048s 00:07:03.188 user 0m0.013s 00:07:03.188 sys 0m0.034s 00:07:03.188 10:17:35 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.188 10:17:35 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 ************************************ 00:07:03.188 END TEST env_mem_callbacks 00:07:03.188 ************************************ 00:07:03.188 00:07:03.188 real 0m6.481s 00:07:03.188 user 0m4.211s 00:07:03.188 sys 0m1.307s 00:07:03.188 10:17:35 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.188 10:17:35 env -- common/autotest_common.sh@10 -- # set +x 00:07:03.188 ************************************ 00:07:03.188 END TEST env 00:07:03.188 ************************************ 00:07:03.188 10:17:35 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:03.188 10:17:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.188 10:17:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.188 10:17:35 -- common/autotest_common.sh@10 -- # set +x 00:07:03.446 ************************************ 00:07:03.446 START TEST rpc 00:07:03.446 ************************************ 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:03.446 * Looking for test storage... 
00:07:03.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.446 10:17:35 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.446 10:17:35 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.446 10:17:35 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.446 10:17:35 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.446 10:17:35 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.446 10:17:35 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:03.446 10:17:35 rpc -- scripts/common.sh@345 -- # : 1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.446 10:17:35 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.446 10:17:35 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@353 -- # local d=1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.446 10:17:35 rpc -- scripts/common.sh@355 -- # echo 1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.446 10:17:35 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@353 -- # local d=2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.446 10:17:35 rpc -- scripts/common.sh@355 -- # echo 2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.446 10:17:35 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.446 10:17:35 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.446 10:17:35 rpc -- scripts/common.sh@368 -- # return 0 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.446 --rc genhtml_branch_coverage=1 00:07:03.446 --rc genhtml_function_coverage=1 00:07:03.446 --rc genhtml_legend=1 00:07:03.446 --rc geninfo_all_blocks=1 00:07:03.446 --rc geninfo_unexecuted_blocks=1 00:07:03.446 00:07:03.446 ' 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.446 --rc genhtml_branch_coverage=1 00:07:03.446 --rc genhtml_function_coverage=1 00:07:03.446 --rc genhtml_legend=1 00:07:03.446 --rc geninfo_all_blocks=1 00:07:03.446 --rc geninfo_unexecuted_blocks=1 00:07:03.446 00:07:03.446 ' 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:03.446 --rc genhtml_branch_coverage=1 00:07:03.446 --rc genhtml_function_coverage=1 00:07:03.446 --rc genhtml_legend=1 00:07:03.446 --rc geninfo_all_blocks=1 00:07:03.446 --rc geninfo_unexecuted_blocks=1 00:07:03.446 00:07:03.446 ' 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:03.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.446 --rc genhtml_branch_coverage=1 00:07:03.446 --rc genhtml_function_coverage=1 00:07:03.446 --rc genhtml_legend=1 00:07:03.446 --rc geninfo_all_blocks=1 00:07:03.446 --rc geninfo_unexecuted_blocks=1 00:07:03.446 00:07:03.446 ' 00:07:03.446 10:17:35 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2415435 00:07:03.446 10:17:35 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:03.446 10:17:35 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:03.446 10:17:35 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2415435 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@835 -- # '[' -z 2415435 ']' 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.446 10:17:35 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.446 [2024-12-09 10:17:35.849739] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:03.446 [2024-12-09 10:17:35.849822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415435 ] 00:07:03.703 [2024-12-09 10:17:35.914546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.703 [2024-12-09 10:17:35.970304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:03.703 [2024-12-09 10:17:35.970362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2415435' to capture a snapshot of events at runtime. 00:07:03.703 [2024-12-09 10:17:35.970383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:03.703 [2024-12-09 10:17:35.970393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:03.703 [2024-12-09 10:17:35.970403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2415435 for offline analysis/debug. 
00:07:03.703 [2024-12-09 10:17:35.970932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.961 10:17:36 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.961 10:17:36 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:03.961 10:17:36 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:03.961 10:17:36 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:03.961 10:17:36 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:03.961 10:17:36 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:03.961 10:17:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.961 10:17:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.961 10:17:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.961 ************************************ 00:07:03.961 START TEST rpc_integrity 00:07:03.961 ************************************ 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.961 10:17:36 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.961 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.961 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:03.961 { 00:07:03.961 "name": "Malloc0", 00:07:03.961 "aliases": [ 00:07:03.961 "cfe8b3dc-7b70-45b5-ae38-4cc2ae1880d9" 00:07:03.961 ], 00:07:03.961 "product_name": "Malloc disk", 00:07:03.961 "block_size": 512, 00:07:03.961 "num_blocks": 16384, 00:07:03.961 "uuid": "cfe8b3dc-7b70-45b5-ae38-4cc2ae1880d9", 00:07:03.961 "assigned_rate_limits": { 00:07:03.961 "rw_ios_per_sec": 0, 00:07:03.961 "rw_mbytes_per_sec": 0, 00:07:03.961 "r_mbytes_per_sec": 0, 00:07:03.961 "w_mbytes_per_sec": 0 00:07:03.961 }, 00:07:03.961 "claimed": false, 00:07:03.961 "zoned": false, 00:07:03.961 "supported_io_types": { 00:07:03.961 "read": true, 00:07:03.961 "write": true, 00:07:03.961 "unmap": true, 00:07:03.961 "flush": true, 00:07:03.961 "reset": true, 00:07:03.961 "nvme_admin": false, 00:07:03.961 "nvme_io": false, 00:07:03.961 "nvme_io_md": false, 00:07:03.961 "write_zeroes": true, 00:07:03.961 "zcopy": true, 00:07:03.961 "get_zone_info": false, 00:07:03.961 
"zone_management": false, 00:07:03.961 "zone_append": false, 00:07:03.961 "compare": false, 00:07:03.961 "compare_and_write": false, 00:07:03.961 "abort": true, 00:07:03.961 "seek_hole": false, 00:07:03.961 "seek_data": false, 00:07:03.961 "copy": true, 00:07:03.961 "nvme_iov_md": false 00:07:03.961 }, 00:07:03.961 "memory_domains": [ 00:07:03.961 { 00:07:03.961 "dma_device_id": "system", 00:07:03.961 "dma_device_type": 1 00:07:03.961 }, 00:07:03.961 { 00:07:03.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.961 "dma_device_type": 2 00:07:03.961 } 00:07:03.961 ], 00:07:03.961 "driver_specific": {} 00:07:03.961 } 00:07:03.961 ]' 00:07:03.962 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:03.962 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:03.962 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:03.962 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.962 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.962 [2024-12-09 10:17:36.357617] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:03.962 [2024-12-09 10:17:36.357652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:03.962 [2024-12-09 10:17:36.357674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14ca020 00:07:03.962 [2024-12-09 10:17:36.357686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:03.962 [2024-12-09 10:17:36.358962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:03.962 [2024-12-09 10:17:36.358985] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:03.962 Passthru0 00:07:03.962 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.962 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:03.962 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.962 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:03.962 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.962 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:03.962 { 00:07:03.962 "name": "Malloc0", 00:07:03.962 "aliases": [ 00:07:03.962 "cfe8b3dc-7b70-45b5-ae38-4cc2ae1880d9" 00:07:03.962 ], 00:07:03.962 "product_name": "Malloc disk", 00:07:03.962 "block_size": 512, 00:07:03.962 "num_blocks": 16384, 00:07:03.962 "uuid": "cfe8b3dc-7b70-45b5-ae38-4cc2ae1880d9", 00:07:03.962 "assigned_rate_limits": { 00:07:03.962 "rw_ios_per_sec": 0, 00:07:03.962 "rw_mbytes_per_sec": 0, 00:07:03.962 "r_mbytes_per_sec": 0, 00:07:03.962 "w_mbytes_per_sec": 0 00:07:03.962 }, 00:07:03.962 "claimed": true, 00:07:03.962 "claim_type": "exclusive_write", 00:07:03.962 "zoned": false, 00:07:03.962 "supported_io_types": { 00:07:03.962 "read": true, 00:07:03.962 "write": true, 00:07:03.962 "unmap": true, 00:07:03.962 "flush": true, 00:07:03.962 "reset": true, 00:07:03.962 "nvme_admin": false, 00:07:03.962 "nvme_io": false, 00:07:03.962 "nvme_io_md": false, 00:07:03.962 "write_zeroes": true, 00:07:03.962 "zcopy": true, 00:07:03.962 "get_zone_info": false, 00:07:03.962 "zone_management": false, 00:07:03.962 "zone_append": false, 00:07:03.962 "compare": false, 00:07:03.962 "compare_and_write": false, 00:07:03.962 "abort": true, 00:07:03.962 "seek_hole": false, 00:07:03.962 "seek_data": false, 00:07:03.962 "copy": true, 00:07:03.962 "nvme_iov_md": false 00:07:03.962 }, 00:07:03.962 "memory_domains": [ 00:07:03.962 { 00:07:03.962 "dma_device_id": "system", 00:07:03.962 "dma_device_type": 1 00:07:03.962 }, 00:07:03.962 { 00:07:03.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.962 "dma_device_type": 2 00:07:03.962 } 00:07:03.962 ], 00:07:03.962 "driver_specific": {} 00:07:03.962 }, 00:07:03.962 { 
00:07:03.962 "name": "Passthru0", 00:07:03.962 "aliases": [ 00:07:03.962 "eda2489a-d875-5572-b9dd-a4ba182faec2" 00:07:03.962 ], 00:07:03.962 "product_name": "passthru", 00:07:03.962 "block_size": 512, 00:07:03.962 "num_blocks": 16384, 00:07:03.962 "uuid": "eda2489a-d875-5572-b9dd-a4ba182faec2", 00:07:03.962 "assigned_rate_limits": { 00:07:03.962 "rw_ios_per_sec": 0, 00:07:03.962 "rw_mbytes_per_sec": 0, 00:07:03.962 "r_mbytes_per_sec": 0, 00:07:03.962 "w_mbytes_per_sec": 0 00:07:03.962 }, 00:07:03.962 "claimed": false, 00:07:03.962 "zoned": false, 00:07:03.962 "supported_io_types": { 00:07:03.962 "read": true, 00:07:03.962 "write": true, 00:07:03.962 "unmap": true, 00:07:03.962 "flush": true, 00:07:03.962 "reset": true, 00:07:03.962 "nvme_admin": false, 00:07:03.962 "nvme_io": false, 00:07:03.962 "nvme_io_md": false, 00:07:03.962 "write_zeroes": true, 00:07:03.962 "zcopy": true, 00:07:03.962 "get_zone_info": false, 00:07:03.962 "zone_management": false, 00:07:03.962 "zone_append": false, 00:07:03.962 "compare": false, 00:07:03.962 "compare_and_write": false, 00:07:03.962 "abort": true, 00:07:03.962 "seek_hole": false, 00:07:03.962 "seek_data": false, 00:07:03.962 "copy": true, 00:07:03.962 "nvme_iov_md": false 00:07:03.962 }, 00:07:03.962 "memory_domains": [ 00:07:03.962 { 00:07:03.962 "dma_device_id": "system", 00:07:03.962 "dma_device_type": 1 00:07:03.962 }, 00:07:03.962 { 00:07:03.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.962 "dma_device_type": 2 00:07:03.962 } 00:07:03.962 ], 00:07:03.962 "driver_specific": { 00:07:03.962 "passthru": { 00:07:03.962 "name": "Passthru0", 00:07:03.962 "base_bdev_name": "Malloc0" 00:07:03.962 } 00:07:03.962 } 00:07:03.962 } 00:07:03.962 ]' 00:07:03.962 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:04.219 10:17:36 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:04.219 10:17:36 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:04.219 00:07:04.219 real 0m0.212s 00:07:04.219 user 0m0.136s 00:07:04.219 sys 0m0.022s 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.219 10:17:36 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.219 ************************************ 00:07:04.219 END TEST rpc_integrity 00:07:04.219 ************************************ 00:07:04.219 10:17:36 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:04.219 10:17:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.219 10:17:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.219 10:17:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.219 ************************************ 00:07:04.219 START TEST rpc_plugins 
00:07:04.219 ************************************ 00:07:04.219 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:04.219 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:04.219 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.219 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:04.220 { 00:07:04.220 "name": "Malloc1", 00:07:04.220 "aliases": [ 00:07:04.220 "62ba2666-1110-4862-b699-7a1589d35597" 00:07:04.220 ], 00:07:04.220 "product_name": "Malloc disk", 00:07:04.220 "block_size": 4096, 00:07:04.220 "num_blocks": 256, 00:07:04.220 "uuid": "62ba2666-1110-4862-b699-7a1589d35597", 00:07:04.220 "assigned_rate_limits": { 00:07:04.220 "rw_ios_per_sec": 0, 00:07:04.220 "rw_mbytes_per_sec": 0, 00:07:04.220 "r_mbytes_per_sec": 0, 00:07:04.220 "w_mbytes_per_sec": 0 00:07:04.220 }, 00:07:04.220 "claimed": false, 00:07:04.220 "zoned": false, 00:07:04.220 "supported_io_types": { 00:07:04.220 "read": true, 00:07:04.220 "write": true, 00:07:04.220 "unmap": true, 00:07:04.220 "flush": true, 00:07:04.220 "reset": true, 00:07:04.220 "nvme_admin": false, 00:07:04.220 "nvme_io": false, 00:07:04.220 "nvme_io_md": false, 00:07:04.220 "write_zeroes": true, 00:07:04.220 "zcopy": true, 00:07:04.220 "get_zone_info": false, 00:07:04.220 "zone_management": false, 00:07:04.220 
"zone_append": false, 00:07:04.220 "compare": false, 00:07:04.220 "compare_and_write": false, 00:07:04.220 "abort": true, 00:07:04.220 "seek_hole": false, 00:07:04.220 "seek_data": false, 00:07:04.220 "copy": true, 00:07:04.220 "nvme_iov_md": false 00:07:04.220 }, 00:07:04.220 "memory_domains": [ 00:07:04.220 { 00:07:04.220 "dma_device_id": "system", 00:07:04.220 "dma_device_type": 1 00:07:04.220 }, 00:07:04.220 { 00:07:04.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.220 "dma_device_type": 2 00:07:04.220 } 00:07:04.220 ], 00:07:04.220 "driver_specific": {} 00:07:04.220 } 00:07:04.220 ]' 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:04.220 10:17:36 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:04.220 00:07:04.220 real 0m0.106s 00:07:04.220 user 0m0.064s 00:07:04.220 sys 0m0.012s 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.220 10:17:36 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:04.220 ************************************ 
00:07:04.220 END TEST rpc_plugins 00:07:04.220 ************************************ 00:07:04.220 10:17:36 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:04.220 10:17:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.220 10:17:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.220 10:17:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.477 ************************************ 00:07:04.477 START TEST rpc_trace_cmd_test 00:07:04.477 ************************************ 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:04.477 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2415435", 00:07:04.477 "tpoint_group_mask": "0x8", 00:07:04.477 "iscsi_conn": { 00:07:04.477 "mask": "0x2", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "scsi": { 00:07:04.477 "mask": "0x4", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "bdev": { 00:07:04.477 "mask": "0x8", 00:07:04.477 "tpoint_mask": "0xffffffffffffffff" 00:07:04.477 }, 00:07:04.477 "nvmf_rdma": { 00:07:04.477 "mask": "0x10", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "nvmf_tcp": { 00:07:04.477 "mask": "0x20", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "ftl": { 00:07:04.477 "mask": "0x40", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "blobfs": { 00:07:04.477 "mask": "0x80", 00:07:04.477 
"tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "dsa": { 00:07:04.477 "mask": "0x200", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "thread": { 00:07:04.477 "mask": "0x400", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "nvme_pcie": { 00:07:04.477 "mask": "0x800", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "iaa": { 00:07:04.477 "mask": "0x1000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "nvme_tcp": { 00:07:04.477 "mask": "0x2000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "bdev_nvme": { 00:07:04.477 "mask": "0x4000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "sock": { 00:07:04.477 "mask": "0x8000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "blob": { 00:07:04.477 "mask": "0x10000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "bdev_raid": { 00:07:04.477 "mask": "0x20000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 }, 00:07:04.477 "scheduler": { 00:07:04.477 "mask": "0x40000", 00:07:04.477 "tpoint_mask": "0x0" 00:07:04.477 } 00:07:04.477 }' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:04.477 00:07:04.477 real 0m0.183s 00:07:04.477 user 0m0.157s 00:07:04.477 sys 0m0.017s 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.477 10:17:36 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 ************************************ 00:07:04.478 END TEST rpc_trace_cmd_test 00:07:04.478 ************************************ 00:07:04.478 10:17:36 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:04.478 10:17:36 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:04.478 10:17:36 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:04.478 10:17:36 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.478 10:17:36 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.478 10:17:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 ************************************ 00:07:04.478 START TEST rpc_daemon_integrity 00:07:04.478 ************************************ 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:04.478 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:04.736 { 00:07:04.736 "name": "Malloc2", 00:07:04.736 "aliases": [ 00:07:04.736 "59c9e5db-6d26-4f34-9f10-54114c01c425" 00:07:04.736 ], 00:07:04.736 "product_name": "Malloc disk", 00:07:04.736 "block_size": 512, 00:07:04.736 "num_blocks": 16384, 00:07:04.736 "uuid": "59c9e5db-6d26-4f34-9f10-54114c01c425", 00:07:04.736 "assigned_rate_limits": { 00:07:04.736 "rw_ios_per_sec": 0, 00:07:04.736 "rw_mbytes_per_sec": 0, 00:07:04.736 "r_mbytes_per_sec": 0, 00:07:04.736 "w_mbytes_per_sec": 0 00:07:04.736 }, 00:07:04.736 "claimed": false, 00:07:04.736 "zoned": false, 00:07:04.736 "supported_io_types": { 00:07:04.736 "read": true, 00:07:04.736 "write": true, 00:07:04.736 "unmap": true, 00:07:04.736 "flush": true, 00:07:04.736 "reset": true, 00:07:04.736 "nvme_admin": false, 00:07:04.736 "nvme_io": false, 00:07:04.736 "nvme_io_md": false, 00:07:04.736 "write_zeroes": true, 00:07:04.736 "zcopy": true, 00:07:04.736 "get_zone_info": false, 00:07:04.736 "zone_management": false, 00:07:04.736 "zone_append": false, 00:07:04.736 "compare": false, 00:07:04.736 "compare_and_write": false, 00:07:04.736 "abort": true, 00:07:04.736 "seek_hole": false, 00:07:04.736 "seek_data": false, 00:07:04.736 "copy": true, 00:07:04.736 "nvme_iov_md": false 00:07:04.736 }, 00:07:04.736 "memory_domains": [ 00:07:04.736 { 
00:07:04.736 "dma_device_id": "system", 00:07:04.736 "dma_device_type": 1 00:07:04.736 }, 00:07:04.736 { 00:07:04.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.736 "dma_device_type": 2 00:07:04.736 } 00:07:04.736 ], 00:07:04.736 "driver_specific": {} 00:07:04.736 } 00:07:04.736 ]' 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:04.736 10:17:36 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.736 [2024-12-09 10:17:37.003712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:04.736 [2024-12-09 10:17:37.003762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:04.736 [2024-12-09 10:17:37.003791] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1419320 00:07:04.736 [2024-12-09 10:17:37.003804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:04.736 [2024-12-09 10:17:37.004952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:04.736 [2024-12-09 10:17:37.004975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:04.736 Passthru0 00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:04.736 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:04.736 { 00:07:04.736 "name": "Malloc2", 00:07:04.736 "aliases": [ 00:07:04.736 "59c9e5db-6d26-4f34-9f10-54114c01c425" 00:07:04.736 ], 00:07:04.736 "product_name": "Malloc disk", 00:07:04.736 "block_size": 512, 00:07:04.736 "num_blocks": 16384, 00:07:04.736 "uuid": "59c9e5db-6d26-4f34-9f10-54114c01c425", 00:07:04.736 "assigned_rate_limits": { 00:07:04.736 "rw_ios_per_sec": 0, 00:07:04.736 "rw_mbytes_per_sec": 0, 00:07:04.736 "r_mbytes_per_sec": 0, 00:07:04.736 "w_mbytes_per_sec": 0 00:07:04.736 }, 00:07:04.736 "claimed": true, 00:07:04.736 "claim_type": "exclusive_write", 00:07:04.736 "zoned": false, 00:07:04.736 "supported_io_types": { 00:07:04.736 "read": true, 00:07:04.736 "write": true, 00:07:04.736 "unmap": true, 00:07:04.736 "flush": true, 00:07:04.736 "reset": true, 00:07:04.736 "nvme_admin": false, 00:07:04.736 "nvme_io": false, 00:07:04.736 "nvme_io_md": false, 00:07:04.736 "write_zeroes": true, 00:07:04.736 "zcopy": true, 00:07:04.736 "get_zone_info": false, 00:07:04.736 "zone_management": false, 00:07:04.736 "zone_append": false, 00:07:04.736 "compare": false, 00:07:04.736 "compare_and_write": false, 00:07:04.736 "abort": true, 00:07:04.736 "seek_hole": false, 00:07:04.736 "seek_data": false, 00:07:04.736 "copy": true, 00:07:04.736 "nvme_iov_md": false 00:07:04.736 }, 00:07:04.736 "memory_domains": [ 00:07:04.736 { 00:07:04.736 "dma_device_id": "system", 00:07:04.736 "dma_device_type": 1 00:07:04.736 }, 00:07:04.736 { 00:07:04.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.736 "dma_device_type": 2 00:07:04.736 } 00:07:04.736 ], 00:07:04.736 "driver_specific": {} 00:07:04.736 }, 00:07:04.736 { 00:07:04.736 "name": "Passthru0", 00:07:04.736 "aliases": [ 00:07:04.736 "84c1a1b7-f015-55ce-89b6-c13afdeadb41" 00:07:04.736 ], 00:07:04.736 "product_name": "passthru", 00:07:04.736 "block_size": 512, 00:07:04.736 "num_blocks": 16384, 00:07:04.736 "uuid": 
"84c1a1b7-f015-55ce-89b6-c13afdeadb41", 00:07:04.736 "assigned_rate_limits": { 00:07:04.736 "rw_ios_per_sec": 0, 00:07:04.737 "rw_mbytes_per_sec": 0, 00:07:04.737 "r_mbytes_per_sec": 0, 00:07:04.737 "w_mbytes_per_sec": 0 00:07:04.737 }, 00:07:04.737 "claimed": false, 00:07:04.737 "zoned": false, 00:07:04.737 "supported_io_types": { 00:07:04.737 "read": true, 00:07:04.737 "write": true, 00:07:04.737 "unmap": true, 00:07:04.737 "flush": true, 00:07:04.737 "reset": true, 00:07:04.737 "nvme_admin": false, 00:07:04.737 "nvme_io": false, 00:07:04.737 "nvme_io_md": false, 00:07:04.737 "write_zeroes": true, 00:07:04.737 "zcopy": true, 00:07:04.737 "get_zone_info": false, 00:07:04.737 "zone_management": false, 00:07:04.737 "zone_append": false, 00:07:04.737 "compare": false, 00:07:04.737 "compare_and_write": false, 00:07:04.737 "abort": true, 00:07:04.737 "seek_hole": false, 00:07:04.737 "seek_data": false, 00:07:04.737 "copy": true, 00:07:04.737 "nvme_iov_md": false 00:07:04.737 }, 00:07:04.737 "memory_domains": [ 00:07:04.737 { 00:07:04.737 "dma_device_id": "system", 00:07:04.737 "dma_device_type": 1 00:07:04.737 }, 00:07:04.737 { 00:07:04.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.737 "dma_device_type": 2 00:07:04.737 } 00:07:04.737 ], 00:07:04.737 "driver_specific": { 00:07:04.737 "passthru": { 00:07:04.737 "name": "Passthru0", 00:07:04.737 "base_bdev_name": "Malloc2" 00:07:04.737 } 00:07:04.737 } 00:07:04.737 } 00:07:04.737 ]' 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:04.737 00:07:04.737 real 0m0.219s 00:07:04.737 user 0m0.143s 00:07:04.737 sys 0m0.023s 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.737 10:17:37 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:04.737 ************************************ 00:07:04.737 END TEST rpc_daemon_integrity 00:07:04.737 ************************************ 00:07:04.737 10:17:37 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:04.737 10:17:37 rpc -- rpc/rpc.sh@84 -- # killprocess 2415435 00:07:04.737 10:17:37 rpc -- common/autotest_common.sh@954 -- # '[' -z 2415435 ']' 00:07:04.737 10:17:37 rpc -- common/autotest_common.sh@958 -- # kill -0 2415435 00:07:04.737 10:17:37 rpc -- common/autotest_common.sh@959 -- # uname 00:07:04.737 10:17:37 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.737 10:17:37 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415435 00:07:04.994 10:17:37 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.994 10:17:37 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.994 10:17:37 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415435' 00:07:04.994 killing process with pid 2415435 00:07:04.994 10:17:37 rpc -- common/autotest_common.sh@973 -- # kill 2415435 00:07:04.994 10:17:37 rpc -- common/autotest_common.sh@978 -- # wait 2415435 00:07:05.252 00:07:05.252 real 0m1.976s 00:07:05.252 user 0m2.427s 00:07:05.252 sys 0m0.598s 00:07:05.252 10:17:37 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.252 10:17:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 ************************************ 00:07:05.252 END TEST rpc 00:07:05.252 ************************************ 00:07:05.252 10:17:37 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:05.252 10:17:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.252 10:17:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.252 10:17:37 -- common/autotest_common.sh@10 -- # set +x 00:07:05.252 ************************************ 00:07:05.252 START TEST skip_rpc 00:07:05.252 ************************************ 00:07:05.252 10:17:37 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:05.511 * Looking for test storage... 
00:07:05.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.511 10:17:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.511 --rc genhtml_branch_coverage=1 00:07:05.511 --rc genhtml_function_coverage=1 00:07:05.511 --rc genhtml_legend=1 00:07:05.511 --rc geninfo_all_blocks=1 00:07:05.511 --rc geninfo_unexecuted_blocks=1 00:07:05.511 00:07:05.511 ' 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.511 --rc genhtml_branch_coverage=1 00:07:05.511 --rc genhtml_function_coverage=1 00:07:05.511 --rc genhtml_legend=1 00:07:05.511 --rc geninfo_all_blocks=1 00:07:05.511 --rc geninfo_unexecuted_blocks=1 00:07:05.511 00:07:05.511 ' 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:07:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.511 --rc genhtml_branch_coverage=1 00:07:05.511 --rc genhtml_function_coverage=1 00:07:05.511 --rc genhtml_legend=1 00:07:05.511 --rc geninfo_all_blocks=1 00:07:05.511 --rc geninfo_unexecuted_blocks=1 00:07:05.511 00:07:05.511 ' 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.511 --rc genhtml_branch_coverage=1 00:07:05.511 --rc genhtml_function_coverage=1 00:07:05.511 --rc genhtml_legend=1 00:07:05.511 --rc geninfo_all_blocks=1 00:07:05.511 --rc geninfo_unexecuted_blocks=1 00:07:05.511 00:07:05.511 ' 00:07:05.511 10:17:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:05.511 10:17:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:05.511 10:17:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.511 10:17:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.511 ************************************ 00:07:05.511 START TEST skip_rpc 00:07:05.511 ************************************ 00:07:05.511 10:17:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:05.511 10:17:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2415767 00:07:05.511 10:17:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:05.511 10:17:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:05.511 10:17:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:07:05.512 [2024-12-09 10:17:37.907986] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:05.512 [2024-12-09 10:17:37.908065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2415767 ] 00:07:05.769 [2024-12-09 10:17:37.973686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.769 [2024-12-09 10:17:38.036609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.031 10:17:42 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2415767 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2415767 ']' 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2415767 00:07:11.031 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2415767 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2415767' 00:07:11.032 killing process with pid 2415767 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2415767 00:07:11.032 10:17:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2415767 00:07:11.032 00:07:11.032 real 0m5.494s 00:07:11.032 user 0m5.182s 00:07:11.032 sys 0m0.329s 00:07:11.032 10:17:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.032 10:17:43 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.032 ************************************ 00:07:11.032 END TEST skip_rpc 00:07:11.032 ************************************ 00:07:11.032 10:17:43 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:11.032 10:17:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.032 10:17:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.032 10:17:43 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.032 ************************************ 00:07:11.032 START TEST skip_rpc_with_json 00:07:11.032 ************************************ 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2416460 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2416460 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2416460 ']' 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.032 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.032 [2024-12-09 10:17:43.452534] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:11.032 [2024-12-09 10:17:43.452628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2416460 ] 00:07:11.290 [2024-12-09 10:17:43.515932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.290 [2024-12-09 10:17:43.569503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.546 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.546 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:11.546 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:11.546 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.546 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.546 [2024-12-09 10:17:43.843148] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:11.546 request: 00:07:11.546 { 00:07:11.546 "trtype": "tcp", 00:07:11.546 "method": "nvmf_get_transports", 00:07:11.546 "req_id": 1 00:07:11.546 } 00:07:11.547 Got JSON-RPC error response 00:07:11.547 response: 00:07:11.547 { 00:07:11.547 "code": -19, 00:07:11.547 "message": "No such device" 00:07:11.547 } 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.547 [2024-12-09 10:17:43.851266] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:11.547 10:17:43 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.547 10:17:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:11.803 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.803 10:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:11.803 { 00:07:11.803 "subsystems": [ 00:07:11.803 { 00:07:11.803 "subsystem": "fsdev", 00:07:11.803 "config": [ 00:07:11.803 { 00:07:11.803 "method": "fsdev_set_opts", 00:07:11.803 "params": { 00:07:11.803 "fsdev_io_pool_size": 65535, 00:07:11.803 "fsdev_io_cache_size": 256 00:07:11.803 } 00:07:11.803 } 00:07:11.803 ] 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "vfio_user_target", 00:07:11.803 "config": null 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "keyring", 00:07:11.803 "config": [] 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "iobuf", 00:07:11.803 "config": [ 00:07:11.803 { 00:07:11.803 "method": "iobuf_set_options", 00:07:11.803 "params": { 00:07:11.803 "small_pool_count": 8192, 00:07:11.803 "large_pool_count": 1024, 00:07:11.803 "small_bufsize": 8192, 00:07:11.803 "large_bufsize": 135168, 00:07:11.803 "enable_numa": false 00:07:11.803 } 00:07:11.803 } 00:07:11.803 ] 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "sock", 00:07:11.803 "config": [ 00:07:11.803 { 00:07:11.803 "method": "sock_set_default_impl", 00:07:11.803 "params": { 00:07:11.803 "impl_name": "posix" 00:07:11.803 } 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "method": "sock_impl_set_options", 00:07:11.803 "params": { 00:07:11.803 "impl_name": "ssl", 00:07:11.803 "recv_buf_size": 4096, 00:07:11.803 "send_buf_size": 4096, 
00:07:11.803 "enable_recv_pipe": true, 00:07:11.803 "enable_quickack": false, 00:07:11.803 "enable_placement_id": 0, 00:07:11.803 "enable_zerocopy_send_server": true, 00:07:11.803 "enable_zerocopy_send_client": false, 00:07:11.803 "zerocopy_threshold": 0, 00:07:11.803 "tls_version": 0, 00:07:11.803 "enable_ktls": false 00:07:11.803 } 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "method": "sock_impl_set_options", 00:07:11.803 "params": { 00:07:11.803 "impl_name": "posix", 00:07:11.803 "recv_buf_size": 2097152, 00:07:11.803 "send_buf_size": 2097152, 00:07:11.803 "enable_recv_pipe": true, 00:07:11.803 "enable_quickack": false, 00:07:11.803 "enable_placement_id": 0, 00:07:11.803 "enable_zerocopy_send_server": true, 00:07:11.803 "enable_zerocopy_send_client": false, 00:07:11.803 "zerocopy_threshold": 0, 00:07:11.803 "tls_version": 0, 00:07:11.803 "enable_ktls": false 00:07:11.803 } 00:07:11.803 } 00:07:11.803 ] 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "vmd", 00:07:11.803 "config": [] 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "accel", 00:07:11.803 "config": [ 00:07:11.803 { 00:07:11.803 "method": "accel_set_options", 00:07:11.803 "params": { 00:07:11.803 "small_cache_size": 128, 00:07:11.803 "large_cache_size": 16, 00:07:11.803 "task_count": 2048, 00:07:11.803 "sequence_count": 2048, 00:07:11.803 "buf_count": 2048 00:07:11.803 } 00:07:11.803 } 00:07:11.803 ] 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "subsystem": "bdev", 00:07:11.803 "config": [ 00:07:11.803 { 00:07:11.803 "method": "bdev_set_options", 00:07:11.803 "params": { 00:07:11.803 "bdev_io_pool_size": 65535, 00:07:11.803 "bdev_io_cache_size": 256, 00:07:11.803 "bdev_auto_examine": true, 00:07:11.803 "iobuf_small_cache_size": 128, 00:07:11.803 "iobuf_large_cache_size": 16 00:07:11.803 } 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "method": "bdev_raid_set_options", 00:07:11.803 "params": { 00:07:11.803 "process_window_size_kb": 1024, 00:07:11.803 "process_max_bandwidth_mb_sec": 0 
00:07:11.803 } 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "method": "bdev_iscsi_set_options", 00:07:11.803 "params": { 00:07:11.803 "timeout_sec": 30 00:07:11.803 } 00:07:11.803 }, 00:07:11.803 { 00:07:11.803 "method": "bdev_nvme_set_options", 00:07:11.804 "params": { 00:07:11.804 "action_on_timeout": "none", 00:07:11.804 "timeout_us": 0, 00:07:11.804 "timeout_admin_us": 0, 00:07:11.804 "keep_alive_timeout_ms": 10000, 00:07:11.804 "arbitration_burst": 0, 00:07:11.804 "low_priority_weight": 0, 00:07:11.804 "medium_priority_weight": 0, 00:07:11.804 "high_priority_weight": 0, 00:07:11.804 "nvme_adminq_poll_period_us": 10000, 00:07:11.804 "nvme_ioq_poll_period_us": 0, 00:07:11.804 "io_queue_requests": 0, 00:07:11.804 "delay_cmd_submit": true, 00:07:11.804 "transport_retry_count": 4, 00:07:11.804 "bdev_retry_count": 3, 00:07:11.804 "transport_ack_timeout": 0, 00:07:11.804 "ctrlr_loss_timeout_sec": 0, 00:07:11.804 "reconnect_delay_sec": 0, 00:07:11.804 "fast_io_fail_timeout_sec": 0, 00:07:11.804 "disable_auto_failback": false, 00:07:11.804 "generate_uuids": false, 00:07:11.804 "transport_tos": 0, 00:07:11.804 "nvme_error_stat": false, 00:07:11.804 "rdma_srq_size": 0, 00:07:11.804 "io_path_stat": false, 00:07:11.804 "allow_accel_sequence": false, 00:07:11.804 "rdma_max_cq_size": 0, 00:07:11.804 "rdma_cm_event_timeout_ms": 0, 00:07:11.804 "dhchap_digests": [ 00:07:11.804 "sha256", 00:07:11.804 "sha384", 00:07:11.804 "sha512" 00:07:11.804 ], 00:07:11.804 "dhchap_dhgroups": [ 00:07:11.804 "null", 00:07:11.804 "ffdhe2048", 00:07:11.804 "ffdhe3072", 00:07:11.804 "ffdhe4096", 00:07:11.804 "ffdhe6144", 00:07:11.804 "ffdhe8192" 00:07:11.804 ] 00:07:11.804 } 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "method": "bdev_nvme_set_hotplug", 00:07:11.804 "params": { 00:07:11.804 "period_us": 100000, 00:07:11.804 "enable": false 00:07:11.804 } 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "method": "bdev_wait_for_examine" 00:07:11.804 } 00:07:11.804 ] 00:07:11.804 }, 00:07:11.804 { 
00:07:11.804 "subsystem": "scsi", 00:07:11.804 "config": null 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "scheduler", 00:07:11.804 "config": [ 00:07:11.804 { 00:07:11.804 "method": "framework_set_scheduler", 00:07:11.804 "params": { 00:07:11.804 "name": "static" 00:07:11.804 } 00:07:11.804 } 00:07:11.804 ] 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "vhost_scsi", 00:07:11.804 "config": [] 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "vhost_blk", 00:07:11.804 "config": [] 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "ublk", 00:07:11.804 "config": [] 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "nbd", 00:07:11.804 "config": [] 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "nvmf", 00:07:11.804 "config": [ 00:07:11.804 { 00:07:11.804 "method": "nvmf_set_config", 00:07:11.804 "params": { 00:07:11.804 "discovery_filter": "match_any", 00:07:11.804 "admin_cmd_passthru": { 00:07:11.804 "identify_ctrlr": false 00:07:11.804 }, 00:07:11.804 "dhchap_digests": [ 00:07:11.804 "sha256", 00:07:11.804 "sha384", 00:07:11.804 "sha512" 00:07:11.804 ], 00:07:11.804 "dhchap_dhgroups": [ 00:07:11.804 "null", 00:07:11.804 "ffdhe2048", 00:07:11.804 "ffdhe3072", 00:07:11.804 "ffdhe4096", 00:07:11.804 "ffdhe6144", 00:07:11.804 "ffdhe8192" 00:07:11.804 ] 00:07:11.804 } 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "method": "nvmf_set_max_subsystems", 00:07:11.804 "params": { 00:07:11.804 "max_subsystems": 1024 00:07:11.804 } 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "method": "nvmf_set_crdt", 00:07:11.804 "params": { 00:07:11.804 "crdt1": 0, 00:07:11.804 "crdt2": 0, 00:07:11.804 "crdt3": 0 00:07:11.804 } 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "method": "nvmf_create_transport", 00:07:11.804 "params": { 00:07:11.804 "trtype": "TCP", 00:07:11.804 "max_queue_depth": 128, 00:07:11.804 "max_io_qpairs_per_ctrlr": 127, 00:07:11.804 "in_capsule_data_size": 4096, 00:07:11.804 "max_io_size": 131072, 00:07:11.804 
"io_unit_size": 131072, 00:07:11.804 "max_aq_depth": 128, 00:07:11.804 "num_shared_buffers": 511, 00:07:11.804 "buf_cache_size": 4294967295, 00:07:11.804 "dif_insert_or_strip": false, 00:07:11.804 "zcopy": false, 00:07:11.804 "c2h_success": true, 00:07:11.804 "sock_priority": 0, 00:07:11.804 "abort_timeout_sec": 1, 00:07:11.804 "ack_timeout": 0, 00:07:11.804 "data_wr_pool_size": 0 00:07:11.804 } 00:07:11.804 } 00:07:11.804 ] 00:07:11.804 }, 00:07:11.804 { 00:07:11.804 "subsystem": "iscsi", 00:07:11.804 "config": [ 00:07:11.804 { 00:07:11.804 "method": "iscsi_set_options", 00:07:11.804 "params": { 00:07:11.804 "node_base": "iqn.2016-06.io.spdk", 00:07:11.804 "max_sessions": 128, 00:07:11.804 "max_connections_per_session": 2, 00:07:11.804 "max_queue_depth": 64, 00:07:11.804 "default_time2wait": 2, 00:07:11.804 "default_time2retain": 20, 00:07:11.804 "first_burst_length": 8192, 00:07:11.804 "immediate_data": true, 00:07:11.804 "allow_duplicated_isid": false, 00:07:11.804 "error_recovery_level": 0, 00:07:11.804 "nop_timeout": 60, 00:07:11.804 "nop_in_interval": 30, 00:07:11.804 "disable_chap": false, 00:07:11.804 "require_chap": false, 00:07:11.804 "mutual_chap": false, 00:07:11.804 "chap_group": 0, 00:07:11.804 "max_large_datain_per_connection": 64, 00:07:11.804 "max_r2t_per_connection": 4, 00:07:11.804 "pdu_pool_size": 36864, 00:07:11.804 "immediate_data_pool_size": 16384, 00:07:11.804 "data_out_pool_size": 2048 00:07:11.804 } 00:07:11.804 } 00:07:11.804 ] 00:07:11.804 } 00:07:11.804 ] 00:07:11.804 } 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2416460 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2416460 ']' 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2416460 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416460 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416460' 00:07:11.804 killing process with pid 2416460 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2416460 00:07:11.804 10:17:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2416460 00:07:12.061 10:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2416600 00:07:12.061 10:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:12.061 10:17:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2416600 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2416600 ']' 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2416600 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2416600 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2416600' 00:07:17.321 killing process with pid 2416600 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2416600 00:07:17.321 10:17:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2416600 00:07:17.579 10:17:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:17.579 10:17:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:17.579 00:07:17.579 real 0m6.610s 00:07:17.579 user 0m6.266s 00:07:17.579 sys 0m0.670s 00:07:17.579 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.579 10:17:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:17.579 ************************************ 00:07:17.579 END TEST skip_rpc_with_json 00:07:17.579 ************************************ 00:07:17.837 10:17:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:17.837 10:17:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.837 10:17:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.837 10:17:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.837 ************************************ 00:07:17.837 START TEST skip_rpc_with_delay 00:07:17.837 ************************************ 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:17.837 [2024-12-09 10:17:50.112113] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:17.837 00:07:17.837 real 0m0.073s 00:07:17.837 user 0m0.042s 00:07:17.837 sys 0m0.031s 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.837 10:17:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:17.837 ************************************ 00:07:17.837 END TEST skip_rpc_with_delay 00:07:17.837 ************************************ 00:07:17.837 10:17:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:17.837 10:17:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:17.837 10:17:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:17.837 10:17:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:17.837 10:17:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.837 10:17:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.837 ************************************ 00:07:17.837 START TEST exit_on_failed_rpc_init 00:07:17.837 ************************************ 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2417313 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2417313 
00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2417313 ']' 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.837 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:17.837 [2024-12-09 10:17:50.240264] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:17.837 [2024-12-09 10:17:50.240355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417313 ] 00:07:18.095 [2024-12-09 10:17:50.307891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.096 [2024-12-09 10:17:50.366993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:18.353 
10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:18.353 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:18.353 [2024-12-09 10:17:50.707335] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:18.353 [2024-12-09 10:17:50.707413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417444 ] 00:07:18.353 [2024-12-09 10:17:50.773195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.609 [2024-12-09 10:17:50.834633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.609 [2024-12-09 10:17:50.834750] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:07:18.609 [2024-12-09 10:17:50.834771] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:18.609 [2024-12-09 10:17:50.834782] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2417313 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2417313 ']' 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2417313 00:07:18.609 10:17:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2417313 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2417313' 00:07:18.609 killing process with pid 2417313 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2417313 00:07:18.609 10:17:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2417313 00:07:19.175 00:07:19.175 real 0m1.272s 00:07:19.175 user 0m1.415s 00:07:19.175 sys 0m0.457s 00:07:19.175 10:17:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.175 10:17:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:19.175 ************************************ 00:07:19.175 END TEST exit_on_failed_rpc_init 00:07:19.175 ************************************ 00:07:19.175 10:17:51 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:19.175 00:07:19.175 real 0m13.804s 00:07:19.175 user 0m13.078s 00:07:19.175 sys 0m1.689s 00:07:19.175 10:17:51 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.175 10:17:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.175 ************************************ 00:07:19.175 END TEST skip_rpc 00:07:19.175 ************************************ 00:07:19.175 10:17:51 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:19.175 10:17:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.175 10:17:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.175 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:07:19.175 ************************************ 00:07:19.175 START TEST rpc_client 00:07:19.175 ************************************ 00:07:19.175 10:17:51 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:19.175 * Looking for test storage... 00:07:19.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:19.175 10:17:51 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.175 10:17:51 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.175 10:17:51 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.433 10:17:51 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.433 10:17:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.434 10:17:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 
00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:19.434 OK 00:07:19.434 10:17:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:19.434 00:07:19.434 real 0m0.155s 00:07:19.434 user 0m0.096s 00:07:19.434 sys 0m0.069s 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:19.434 10:17:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:19.434 ************************************ 00:07:19.434 END TEST rpc_client 00:07:19.434 ************************************ 00:07:19.434 10:17:51 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:19.434 10:17:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:19.434 10:17:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:19.434 10:17:51 -- common/autotest_common.sh@10 
-- # set +x 00:07:19.434 ************************************ 00:07:19.434 START TEST json_config 00:07:19.434 ************************************ 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.434 10:17:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.434 10:17:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.434 10:17:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.434 10:17:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.434 10:17:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.434 10:17:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:19.434 10:17:51 json_config -- scripts/common.sh@345 -- # : 1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.434 10:17:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:19.434 10:17:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@353 -- # local d=1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.434 10:17:51 json_config -- scripts/common.sh@355 -- # echo 1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.434 10:17:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@353 -- # local d=2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.434 10:17:51 json_config -- scripts/common.sh@355 -- # echo 2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.434 10:17:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.434 10:17:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.434 10:17:51 json_config -- scripts/common.sh@368 -- # return 0 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:19.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.434 --rc genhtml_branch_coverage=1 00:07:19.434 --rc genhtml_function_coverage=1 00:07:19.434 --rc genhtml_legend=1 00:07:19.434 --rc geninfo_all_blocks=1 00:07:19.434 --rc geninfo_unexecuted_blocks=1 00:07:19.434 00:07:19.434 ' 00:07:19.434 10:17:51 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:19.434 10:17:51 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:19.434 10:17:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:19.434 10:17:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:19.434 10:17:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:19.434 10:17:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:19.434 10:17:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.434 10:17:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.435 10:17:51 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.435 10:17:51 json_config -- paths/export.sh@5 -- # export PATH 00:07:19.435 10:17:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@51 -- # : 0 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:19.435 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:19.435 10:17:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:19.435 INFO: JSON configuration test init 00:07:19.435 10:17:51 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.435 10:17:51 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:19.435 10:17:51 json_config -- json_config/common.sh@9 -- # local app=target 00:07:19.435 10:17:51 json_config -- json_config/common.sh@10 -- # shift 00:07:19.435 10:17:51 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:19.435 10:17:51 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:19.435 10:17:51 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:19.435 10:17:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.435 10:17:51 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:19.435 10:17:51 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2417705 00:07:19.435 10:17:51 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:19.435 10:17:51 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:19.435 Waiting for target to run... 
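The `lt 1.15 2` / `cmp_versions` xtrace above (both `rpc_client` and `json_config` run it to gate the lcov branch-coverage options) implements a dotted-version comparison in pure bash: split each version on `.`, pad the shorter one with zeros, and compare components numerically left to right. A minimal standalone sketch of the same idea (`version_lt` is a hypothetical name, not the `scripts/common.sh` helper):

```shell
#!/usr/bin/env bash
# version_lt A B: succeed (exit 0) when dotted version A sorts strictly
# before B, comparing numeric components left to right and padding the
# shorter version with zeros -- the approach the cmp_versions trace uses.
version_lt() {
    local IFS=.
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for (( v = 0; v < len; v++ )); do
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1   # equal versions are not strictly less than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why the trace ends with `return 0` and then sets `lcov_rc_opt`: 1.15 compares below 2, so the older-lcov option set is selected.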
00:07:19.435 10:17:51 json_config -- json_config/common.sh@25 -- # waitforlisten 2417705 /var/tmp/spdk_tgt.sock 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@835 -- # '[' -z 2417705 ']' 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:19.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.435 10:17:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:19.695 [2024-12-09 10:17:51.920696] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:19.695 [2024-12-09 10:17:51.920778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2417705 ] 00:07:20.263 [2024-12-09 10:17:52.496337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.263 [2024-12-09 10:17:52.549855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.521 10:17:52 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.521 10:17:52 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:20.521 10:17:52 json_config -- json_config/common.sh@26 -- # echo '' 00:07:20.521 00:07:20.521 10:17:52 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:20.521 10:17:52 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:20.521 10:17:52 json_config -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:20.521 10:17:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 10:17:52 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:20.521 10:17:52 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:20.521 10:17:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.521 10:17:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:20.521 10:17:52 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:20.521 10:17:52 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:20.521 10:17:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:23.828 10:17:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:23.828 10:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:23.828 10:17:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:23.828 
10:17:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@54 -- # sort 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:24.145 10:17:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:24.145 10:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@237 -- # timing_enter 
create_nvmf_subsystem_config 00:07:24.145 10:17:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.145 10:17:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:24.145 10:17:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:24.145 10:17:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:24.428 MallocForNvmf0 00:07:24.428 10:17:56 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:24.428 10:17:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:24.686 MallocForNvmf1 00:07:24.686 10:17:56 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:24.686 10:17:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:24.944 [2024-12-09 10:17:57.198023] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:24.944 10:17:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:24.944 10:17:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:25.201 10:17:57 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:25.201 10:17:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:25.457 10:17:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:25.458 10:17:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:25.715 10:17:57 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:25.715 10:17:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:25.972 [2024-12-09 10:17:58.237429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:25.972 10:17:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:25.972 10:17:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.972 10:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.972 10:17:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:25.972 10:17:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.972 10:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:25.972 10:17:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:07:25.972 10:17:58 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:25.972 10:17:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:26.229 MallocBdevForConfigChangeCheck 00:07:26.229 10:17:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:26.229 10:17:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:26.229 10:17:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:26.229 10:17:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:26.229 10:17:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:26.794 10:17:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:26.794 INFO: shutting down applications... 
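Earlier in this trace, `nvmf/common.sh` line 33 logged `[: : integer expression expected` when it evaluated `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, and an unset variable expands to the empty string. The test returns status 2 (treated as false), so the run continues, but the diagnostic lands in the log. A small sketch of the failure mode and the usual defensive default (variable name `flag` is illustrative):

```shell
#!/usr/bin/env bash
# Reproduce the "[: : integer expression expected" condition: an empty
# string reaches a numeric comparison. The test fails (status 2) rather
# than aborting the script.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
else
    echo "not enabled"        # taken: '' is not a valid integer operand
fi

# Defensive variant: substitute a numeric default before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"           # taken: the default 0 is a valid integer, != 1
fi
```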
00:07:26.794 10:17:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:26.794 10:17:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:26.794 10:17:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:26.794 10:17:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:28.165 Calling clear_iscsi_subsystem 00:07:28.165 Calling clear_nvmf_subsystem 00:07:28.165 Calling clear_nbd_subsystem 00:07:28.165 Calling clear_ublk_subsystem 00:07:28.165 Calling clear_vhost_blk_subsystem 00:07:28.165 Calling clear_vhost_scsi_subsystem 00:07:28.165 Calling clear_bdev_subsystem 00:07:28.165 10:18:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:28.165 10:18:00 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:28.422 10:18:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:28.422 10:18:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:28.422 10:18:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:28.422 10:18:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:28.681 10:18:01 json_config -- json_config/json_config.sh@352 -- # break 00:07:28.681 10:18:01 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:28.681 10:18:01 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:28.681 10:18:01 json_config -- 
json_config/common.sh@31 -- # local app=target 00:07:28.681 10:18:01 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:28.681 10:18:01 json_config -- json_config/common.sh@35 -- # [[ -n 2417705 ]] 00:07:28.681 10:18:01 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2417705 00:07:28.681 10:18:01 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:28.681 10:18:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:28.681 10:18:01 json_config -- json_config/common.sh@41 -- # kill -0 2417705 00:07:28.681 10:18:01 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:07:29.249 10:18:01 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:07:29.249 10:18:01 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:29.249 10:18:01 json_config -- json_config/common.sh@41 -- # kill -0 2417705 00:07:29.249 10:18:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:29.249 10:18:01 json_config -- json_config/common.sh@43 -- # break 00:07:29.249 10:18:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:29.249 10:18:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:29.249 SPDK target shutdown done 00:07:29.249 10:18:01 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:07:29.249 INFO: relaunching applications... 
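The shutdown sequence traced above (`kill -SIGINT`, then a loop of `kill -0 $pid` probes with `sleep 0.5` between them, up to 30 tries, then `break` and "SPDK target shutdown done") is a standard wait-for-exit poll. A minimal standalone sketch under stated assumptions: `wait_for_exit` is a hypothetical name, not the `json_config/common.sh` helper, and the demo sends SIGTERM rather than SIGINT because background jobs in non-interactive shells start with SIGINT ignored:

```shell
#!/usr/bin/env bash
# wait_for_exit PID [RETRIES]: poll with `kill -0` until the process is
# gone, mirroring the SIGINT-then-poll shutdown loop in the trace above.
wait_for_exit() {
    local pid=$1 retries=${2:-30} i
    for (( i = 0; i < retries; i++ )); do
        kill -0 "$pid" 2>/dev/null || return 0   # pid gone: shutdown done
        sleep 0.5
    done
    return 1   # still alive after all retries; the harness escalates
}

sleep 30 &                      # stand-in for the spdk_tgt process
pid=$!
kill -TERM "$pid"               # polite shutdown request (harness uses SIGINT)
wait_for_exit "$pid" && echo "SPDK target shutdown done"
```

`kill -0` sends no signal; it only checks whether the pid can be signalled, which is why it doubles as an existence probe here.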
00:07:29.249 10:18:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:29.249 10:18:01 json_config -- json_config/common.sh@9 -- # local app=target 00:07:29.249 10:18:01 json_config -- json_config/common.sh@10 -- # shift 00:07:29.249 10:18:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:29.249 10:18:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:29.249 10:18:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:29.249 10:18:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.249 10:18:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:29.249 10:18:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2419001 00:07:29.249 10:18:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:29.249 10:18:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:29.249 Waiting for target to run... 00:07:29.249 10:18:01 json_config -- json_config/common.sh@25 -- # waitforlisten 2419001 /var/tmp/spdk_tgt.sock 00:07:29.249 10:18:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 2419001 ']' 00:07:29.249 10:18:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:29.249 10:18:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.249 10:18:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:29.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:07:29.249 10:18:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.249 10:18:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:29.249 [2024-12-09 10:18:01.574791] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:29.249 [2024-12-09 10:18:01.574867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2419001 ] 00:07:29.817 [2024-12-09 10:18:02.122718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.817 [2024-12-09 10:18:02.176742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.099 [2024-12-09 10:18:05.230394] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.099 [2024-12-09 10:18:05.262869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:33.099 10:18:05 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.099 10:18:05 json_config -- common/autotest_common.sh@868 -- # return 0 00:07:33.099 10:18:05 json_config -- json_config/common.sh@26 -- # echo '' 00:07:33.099 00:07:33.099 10:18:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:07:33.099 10:18:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:07:33.099 INFO: Checking if target configuration is the same... 
00:07:33.099 10:18:05 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.099 10:18:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:07:33.099 10:18:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.099 + '[' 2 -ne 2 ']' 00:07:33.099 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:33.099 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:07:33.099 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:33.099 +++ basename /dev/fd/62 00:07:33.099 ++ mktemp /tmp/62.XXX 00:07:33.099 + tmp_file_1=/tmp/62.SiJ 00:07:33.099 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.099 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.099 + tmp_file_2=/tmp/spdk_tgt_config.json.IVN 00:07:33.099 + ret=0 00:07:33.099 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:33.356 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:33.356 + diff -u /tmp/62.SiJ /tmp/spdk_tgt_config.json.IVN 00:07:33.356 + echo 'INFO: JSON config files are the same' 00:07:33.356 INFO: JSON config files are the same 00:07:33.356 + rm /tmp/62.SiJ /tmp/spdk_tgt_config.json.IVN 00:07:33.356 + exit 0 00:07:33.356 10:18:05 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:07:33.356 10:18:05 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:07:33.356 INFO: changing configuration and checking if this can be detected... 
00:07:33.356 10:18:05 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.356 10:18:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:07:33.612 10:18:06 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.612 10:18:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:07:33.612 10:18:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:33.612 + '[' 2 -ne 2 ']' 00:07:33.612 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:07:33.612 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:07:33.612 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:33.612 +++ basename /dev/fd/62 00:07:33.612 ++ mktemp /tmp/62.XXX 00:07:33.612 + tmp_file_1=/tmp/62.zFh 00:07:33.612 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:33.612 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:07:33.612 + tmp_file_2=/tmp/spdk_tgt_config.json.5yX 00:07:33.612 + ret=0 00:07:33.612 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:34.197 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:07:34.197 + diff -u /tmp/62.zFh /tmp/spdk_tgt_config.json.5yX 00:07:34.197 + ret=1 00:07:34.197 + echo '=== Start of file: /tmp/62.zFh ===' 00:07:34.197 + cat /tmp/62.zFh 00:07:34.197 + echo '=== End of file: /tmp/62.zFh ===' 00:07:34.197 + echo '' 00:07:34.197 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5yX ===' 00:07:34.197 + cat /tmp/spdk_tgt_config.json.5yX 00:07:34.197 + echo '=== End of file: /tmp/spdk_tgt_config.json.5yX ===' 00:07:34.197 + echo '' 00:07:34.197 + rm /tmp/62.zFh /tmp/spdk_tgt_config.json.5yX 00:07:34.197 + exit 1 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:07:34.197 INFO: configuration change detected. 
00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@324 -- # [[ -n 2419001 ]] 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@200 -- # uname -s 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:34.197 10:18:06 json_config -- json_config/json_config.sh@330 -- # killprocess 2419001 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@954 -- # '[' -z 2419001 ']' 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@958 -- # kill -0 
2419001 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@959 -- # uname 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2419001 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2419001' 00:07:34.197 killing process with pid 2419001 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@973 -- # kill 2419001 00:07:34.197 10:18:06 json_config -- common/autotest_common.sh@978 -- # wait 2419001 00:07:36.094 10:18:08 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:36.094 10:18:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:36.094 10:18:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.094 10:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.094 10:18:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:36.094 10:18:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:36.094 INFO: Success 00:07:36.094 00:07:36.094 real 0m16.490s 00:07:36.094 user 0m17.737s 00:07:36.094 sys 0m2.945s 00:07:36.094 10:18:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.094 10:18:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:36.094 ************************************ 00:07:36.094 END TEST json_config 00:07:36.094 ************************************ 00:07:36.094 10:18:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
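The json_config test above checks whether the running target's configuration matches the original file by sorting both JSON documents (`config_filter.py -method sort`) and then running `diff -u` on the results, so that key ordering never produces a spurious mismatch. A minimal sketch of that comparison idea — using a generic `python3` one-liner as the normalizer rather than SPDK's actual `config_filter.py`, and hypothetical helper names — might look like:

```shell
# Sketch only: norm/configs_match are illustrative helpers, not SPDK scripts.

# Re-serialize a JSON file with sorted keys so ordering differences vanish.
norm() {
  python3 -c 'import json, sys; print(json.dumps(json.load(open(sys.argv[1])), sort_keys=True, indent=2))' "$1"
}

# Diff the normalized forms; exit 0 when the configs are semantically equal.
configs_match() {
  diff -u <(norm "$1") <(norm "$2")
}
```

With this approach, two configs that differ only in key order compare equal, which is why the log prints "INFO: JSON config files are the same" on the first pass and only reports a change after `bdev_malloc_delete` actually alters the saved config.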
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:36.094 10:18:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.094 10:18:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.094 10:18:08 -- common/autotest_common.sh@10 -- # set +x 00:07:36.094 ************************************ 00:07:36.094 START TEST json_config_extra_key 00:07:36.094 ************************************ 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.094 --rc genhtml_branch_coverage=1 00:07:36.094 --rc genhtml_function_coverage=1 00:07:36.094 --rc genhtml_legend=1 00:07:36.094 --rc geninfo_all_blocks=1 
00:07:36.094 --rc geninfo_unexecuted_blocks=1 00:07:36.094 00:07:36.094 ' 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.094 --rc genhtml_branch_coverage=1 00:07:36.094 --rc genhtml_function_coverage=1 00:07:36.094 --rc genhtml_legend=1 00:07:36.094 --rc geninfo_all_blocks=1 00:07:36.094 --rc geninfo_unexecuted_blocks=1 00:07:36.094 00:07:36.094 ' 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.094 --rc genhtml_branch_coverage=1 00:07:36.094 --rc genhtml_function_coverage=1 00:07:36.094 --rc genhtml_legend=1 00:07:36.094 --rc geninfo_all_blocks=1 00:07:36.094 --rc geninfo_unexecuted_blocks=1 00:07:36.094 00:07:36.094 ' 00:07:36.094 10:18:08 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:36.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.094 --rc genhtml_branch_coverage=1 00:07:36.094 --rc genhtml_function_coverage=1 00:07:36.094 --rc genhtml_legend=1 00:07:36.094 --rc geninfo_all_blocks=1 00:07:36.094 --rc geninfo_unexecuted_blocks=1 00:07:36.094 00:07:36.094 ' 00:07:36.094 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:36.094 10:18:08 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:36.094 10:18:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:36.094 10:18:08 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.094 10:18:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.095 10:18:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.095 10:18:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:36.095 10:18:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:36.095 10:18:08 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:36.095 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:36.095 10:18:08 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:36.095 INFO: launching applications... 00:07:36.095 10:18:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2420442 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:36.095 Waiting for target to run... 
00:07:36.095 10:18:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2420442 /var/tmp/spdk_tgt.sock 00:07:36.095 10:18:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2420442 ']' 00:07:36.095 10:18:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:36.095 10:18:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.095 10:18:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:36.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:36.095 10:18:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.095 10:18:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:36.095 [2024-12-09 10:18:08.476322] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:36.095 [2024-12-09 10:18:08.476423] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420442 ] 00:07:36.663 [2024-12-09 10:18:09.011301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.663 [2024-12-09 10:18:09.066307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.231 10:18:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.231 10:18:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:37.231 00:07:37.231 10:18:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:37.231 INFO: shutting down applications... 00:07:37.231 10:18:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2420442 ]] 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2420442 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2420442 00:07:37.231 10:18:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:37.798 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:37.798 10:18:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:37.798 10:18:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2420442 00:07:37.798 10:18:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2420442 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:38.057 10:18:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:38.057 SPDK target shutdown done 00:07:38.057 10:18:10 json_config_extra_key 
-- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:38.057 Success 00:07:38.057 00:07:38.057 real 0m2.205s 00:07:38.057 user 0m1.547s 00:07:38.057 sys 0m0.660s 00:07:38.057 10:18:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.057 10:18:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:38.057 ************************************ 00:07:38.057 END TEST json_config_extra_key 00:07:38.057 ************************************ 00:07:38.057 10:18:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:38.057 10:18:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.057 10:18:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.057 10:18:10 -- common/autotest_common.sh@10 -- # set +x 00:07:38.316 ************************************ 00:07:38.316 START TEST alias_rpc 00:07:38.316 ************************************ 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:38.316 * Looking for test storage... 
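The shutdown sequence in the json_config_extra_key test above sends `kill -SIGINT` to the target and then polls `kill -0 $pid` with `sleep 0.5` for up to 30 iterations before declaring "SPDK target shutdown done". A standalone sketch of that wait loop — the function name and the signal parameter are assumptions for illustration, not SPDK's `json_config/common.sh` API — could be:

```shell
# Sketch only: wait_for_exit is an illustrative helper mirroring the
# SIGINT-then-poll pattern visible in the log, not the actual SPDK script.
wait_for_exit() {
  local pid=$1 sig=${2:-INT} tries=30
  kill -s "$sig" "$pid" 2>/dev/null
  # kill -0 sends no signal; it only tests whether the process still exists.
  while kill -0 "$pid" 2>/dev/null && (( tries-- > 0 )); do
    sleep 0.5
  done
  # Succeed only if the process is really gone.
  ! kill -0 "$pid" 2>/dev/null
}
```

Bounding the loop (30 tries of 0.5 s, roughly 15 s) matters in CI: a target that ignores SIGINT fails the test promptly instead of hanging the Jenkins executor.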
00:07:38.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.316 10:18:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.316 --rc genhtml_branch_coverage=1 00:07:38.316 --rc genhtml_function_coverage=1 00:07:38.316 --rc genhtml_legend=1 00:07:38.316 --rc geninfo_all_blocks=1 00:07:38.316 --rc geninfo_unexecuted_blocks=1 00:07:38.316 00:07:38.316 ' 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.316 --rc genhtml_branch_coverage=1 00:07:38.316 --rc genhtml_function_coverage=1 00:07:38.316 --rc genhtml_legend=1 00:07:38.316 --rc geninfo_all_blocks=1 00:07:38.316 --rc geninfo_unexecuted_blocks=1 00:07:38.316 00:07:38.316 ' 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:07:38.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.316 --rc genhtml_branch_coverage=1 00:07:38.316 --rc genhtml_function_coverage=1 00:07:38.316 --rc genhtml_legend=1 00:07:38.316 --rc geninfo_all_blocks=1 00:07:38.316 --rc geninfo_unexecuted_blocks=1 00:07:38.316 00:07:38.316 ' 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.316 --rc genhtml_branch_coverage=1 00:07:38.316 --rc genhtml_function_coverage=1 00:07:38.316 --rc genhtml_legend=1 00:07:38.316 --rc geninfo_all_blocks=1 00:07:38.316 --rc geninfo_unexecuted_blocks=1 00:07:38.316 00:07:38.316 ' 00:07:38.316 10:18:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:38.316 10:18:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2420886 00:07:38.316 10:18:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:38.316 10:18:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2420886 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2420886 ']' 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.316 10:18:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.317 10:18:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.317 10:18:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.317 10:18:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.317 [2024-12-09 10:18:10.717778] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:38.317 [2024-12-09 10:18:10.717860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2420886 ] 00:07:38.575 [2024-12-09 10:18:10.804536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.575 [2024-12-09 10:18:10.881275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.834 10:18:11 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.834 10:18:11 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.834 10:18:11 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:39.091 10:18:11 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2420886 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2420886 ']' 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2420886 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2420886 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2420886' 00:07:39.091 killing process with pid 2420886 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@973 -- # kill 2420886 00:07:39.091 10:18:11 alias_rpc -- common/autotest_common.sh@978 -- # wait 2420886 00:07:39.657 00:07:39.657 real 0m1.459s 00:07:39.657 user 0m1.678s 00:07:39.657 sys 0m0.487s 00:07:39.657 10:18:11 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.657 10:18:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.657 ************************************ 00:07:39.657 END TEST alias_rpc 00:07:39.657 ************************************ 00:07:39.657 10:18:12 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:39.657 10:18:12 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:39.657 10:18:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.657 10:18:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.657 10:18:12 -- common/autotest_common.sh@10 -- # set +x 00:07:39.657 ************************************ 00:07:39.657 START TEST spdkcli_tcp 00:07:39.657 ************************************ 00:07:39.657 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:39.657 * Looking for test storage... 
00:07:39.657 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:39.657 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:39.657 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:07:39.657 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.916 10:18:12 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:39.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.916 --rc genhtml_branch_coverage=1 00:07:39.916 --rc genhtml_function_coverage=1 00:07:39.916 --rc genhtml_legend=1 00:07:39.916 --rc geninfo_all_blocks=1 00:07:39.916 --rc geninfo_unexecuted_blocks=1 00:07:39.916 00:07:39.916 ' 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:39.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.916 --rc genhtml_branch_coverage=1 00:07:39.916 --rc genhtml_function_coverage=1 00:07:39.916 --rc genhtml_legend=1 00:07:39.916 --rc geninfo_all_blocks=1 00:07:39.916 --rc geninfo_unexecuted_blocks=1 00:07:39.916 00:07:39.916 ' 00:07:39.916 10:18:12 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:39.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.916 --rc genhtml_branch_coverage=1 00:07:39.916 --rc genhtml_function_coverage=1 00:07:39.916 --rc genhtml_legend=1 00:07:39.916 --rc geninfo_all_blocks=1 00:07:39.916 --rc geninfo_unexecuted_blocks=1 00:07:39.916 00:07:39.916 ' 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:39.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.916 --rc genhtml_branch_coverage=1 00:07:39.916 --rc genhtml_function_coverage=1 00:07:39.916 --rc genhtml_legend=1 00:07:39.916 --rc geninfo_all_blocks=1 00:07:39.916 --rc geninfo_unexecuted_blocks=1 00:07:39.916 00:07:39.916 ' 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2421091 00:07:39.916 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:39.916 10:18:12 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2421091 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2421091 ']' 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.916 10:18:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.916 [2024-12-09 10:18:12.236715] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:39.916 [2024-12-09 10:18:12.236803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421091 ] 00:07:39.916 [2024-12-09 10:18:12.301191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.175 [2024-12-09 10:18:12.363548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.175 [2024-12-09 10:18:12.363553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.433 10:18:12 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.433 10:18:12 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:40.433 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2421100 00:07:40.433 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:40.433 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:40.691 [ 00:07:40.691 "bdev_malloc_delete", 00:07:40.691 "bdev_malloc_create", 00:07:40.691 "bdev_null_resize", 00:07:40.691 "bdev_null_delete", 00:07:40.691 "bdev_null_create", 00:07:40.691 "bdev_nvme_cuse_unregister", 00:07:40.691 "bdev_nvme_cuse_register", 00:07:40.691 "bdev_opal_new_user", 00:07:40.691 "bdev_opal_set_lock_state", 00:07:40.691 "bdev_opal_delete", 00:07:40.691 "bdev_opal_get_info", 00:07:40.691 "bdev_opal_create", 00:07:40.691 "bdev_nvme_opal_revert", 00:07:40.691 "bdev_nvme_opal_init", 00:07:40.691 "bdev_nvme_send_cmd", 00:07:40.691 "bdev_nvme_set_keys", 00:07:40.691 "bdev_nvme_get_path_iostat", 00:07:40.691 "bdev_nvme_get_mdns_discovery_info", 00:07:40.691 "bdev_nvme_stop_mdns_discovery", 00:07:40.691 "bdev_nvme_start_mdns_discovery", 00:07:40.691 "bdev_nvme_set_multipath_policy", 00:07:40.691 "bdev_nvme_set_preferred_path", 00:07:40.691 "bdev_nvme_get_io_paths", 00:07:40.691 "bdev_nvme_remove_error_injection", 00:07:40.691 "bdev_nvme_add_error_injection", 00:07:40.691 "bdev_nvme_get_discovery_info", 00:07:40.691 "bdev_nvme_stop_discovery", 00:07:40.691 "bdev_nvme_start_discovery", 00:07:40.691 "bdev_nvme_get_controller_health_info", 00:07:40.691 "bdev_nvme_disable_controller", 00:07:40.691 "bdev_nvme_enable_controller", 00:07:40.691 "bdev_nvme_reset_controller", 00:07:40.691 "bdev_nvme_get_transport_statistics", 00:07:40.691 "bdev_nvme_apply_firmware", 00:07:40.691 "bdev_nvme_detach_controller", 00:07:40.691 "bdev_nvme_get_controllers", 00:07:40.691 "bdev_nvme_attach_controller", 00:07:40.691 "bdev_nvme_set_hotplug", 00:07:40.691 "bdev_nvme_set_options", 00:07:40.691 "bdev_passthru_delete", 00:07:40.691 "bdev_passthru_create", 00:07:40.691 "bdev_lvol_set_parent_bdev", 00:07:40.691 "bdev_lvol_set_parent", 00:07:40.691 "bdev_lvol_check_shallow_copy", 00:07:40.691 "bdev_lvol_start_shallow_copy", 00:07:40.691 "bdev_lvol_grow_lvstore", 00:07:40.691 "bdev_lvol_get_lvols", 00:07:40.691 
"bdev_lvol_get_lvstores", 00:07:40.691 "bdev_lvol_delete", 00:07:40.691 "bdev_lvol_set_read_only", 00:07:40.691 "bdev_lvol_resize", 00:07:40.692 "bdev_lvol_decouple_parent", 00:07:40.692 "bdev_lvol_inflate", 00:07:40.692 "bdev_lvol_rename", 00:07:40.692 "bdev_lvol_clone_bdev", 00:07:40.692 "bdev_lvol_clone", 00:07:40.692 "bdev_lvol_snapshot", 00:07:40.692 "bdev_lvol_create", 00:07:40.692 "bdev_lvol_delete_lvstore", 00:07:40.692 "bdev_lvol_rename_lvstore", 00:07:40.692 "bdev_lvol_create_lvstore", 00:07:40.692 "bdev_raid_set_options", 00:07:40.692 "bdev_raid_remove_base_bdev", 00:07:40.692 "bdev_raid_add_base_bdev", 00:07:40.692 "bdev_raid_delete", 00:07:40.692 "bdev_raid_create", 00:07:40.692 "bdev_raid_get_bdevs", 00:07:40.692 "bdev_error_inject_error", 00:07:40.692 "bdev_error_delete", 00:07:40.692 "bdev_error_create", 00:07:40.692 "bdev_split_delete", 00:07:40.692 "bdev_split_create", 00:07:40.692 "bdev_delay_delete", 00:07:40.692 "bdev_delay_create", 00:07:40.692 "bdev_delay_update_latency", 00:07:40.692 "bdev_zone_block_delete", 00:07:40.692 "bdev_zone_block_create", 00:07:40.692 "blobfs_create", 00:07:40.692 "blobfs_detect", 00:07:40.692 "blobfs_set_cache_size", 00:07:40.692 "bdev_aio_delete", 00:07:40.692 "bdev_aio_rescan", 00:07:40.692 "bdev_aio_create", 00:07:40.692 "bdev_ftl_set_property", 00:07:40.692 "bdev_ftl_get_properties", 00:07:40.692 "bdev_ftl_get_stats", 00:07:40.692 "bdev_ftl_unmap", 00:07:40.692 "bdev_ftl_unload", 00:07:40.692 "bdev_ftl_delete", 00:07:40.692 "bdev_ftl_load", 00:07:40.692 "bdev_ftl_create", 00:07:40.692 "bdev_virtio_attach_controller", 00:07:40.692 "bdev_virtio_scsi_get_devices", 00:07:40.692 "bdev_virtio_detach_controller", 00:07:40.692 "bdev_virtio_blk_set_hotplug", 00:07:40.692 "bdev_iscsi_delete", 00:07:40.692 "bdev_iscsi_create", 00:07:40.692 "bdev_iscsi_set_options", 00:07:40.692 "accel_error_inject_error", 00:07:40.692 "ioat_scan_accel_module", 00:07:40.692 "dsa_scan_accel_module", 00:07:40.692 "iaa_scan_accel_module", 
00:07:40.692 "vfu_virtio_create_fs_endpoint", 00:07:40.692 "vfu_virtio_create_scsi_endpoint", 00:07:40.692 "vfu_virtio_scsi_remove_target", 00:07:40.692 "vfu_virtio_scsi_add_target", 00:07:40.692 "vfu_virtio_create_blk_endpoint", 00:07:40.692 "vfu_virtio_delete_endpoint", 00:07:40.692 "keyring_file_remove_key", 00:07:40.692 "keyring_file_add_key", 00:07:40.692 "keyring_linux_set_options", 00:07:40.692 "fsdev_aio_delete", 00:07:40.692 "fsdev_aio_create", 00:07:40.692 "iscsi_get_histogram", 00:07:40.692 "iscsi_enable_histogram", 00:07:40.692 "iscsi_set_options", 00:07:40.692 "iscsi_get_auth_groups", 00:07:40.692 "iscsi_auth_group_remove_secret", 00:07:40.692 "iscsi_auth_group_add_secret", 00:07:40.692 "iscsi_delete_auth_group", 00:07:40.692 "iscsi_create_auth_group", 00:07:40.692 "iscsi_set_discovery_auth", 00:07:40.692 "iscsi_get_options", 00:07:40.692 "iscsi_target_node_request_logout", 00:07:40.692 "iscsi_target_node_set_redirect", 00:07:40.692 "iscsi_target_node_set_auth", 00:07:40.692 "iscsi_target_node_add_lun", 00:07:40.692 "iscsi_get_stats", 00:07:40.692 "iscsi_get_connections", 00:07:40.692 "iscsi_portal_group_set_auth", 00:07:40.692 "iscsi_start_portal_group", 00:07:40.692 "iscsi_delete_portal_group", 00:07:40.692 "iscsi_create_portal_group", 00:07:40.692 "iscsi_get_portal_groups", 00:07:40.692 "iscsi_delete_target_node", 00:07:40.692 "iscsi_target_node_remove_pg_ig_maps", 00:07:40.692 "iscsi_target_node_add_pg_ig_maps", 00:07:40.692 "iscsi_create_target_node", 00:07:40.692 "iscsi_get_target_nodes", 00:07:40.692 "iscsi_delete_initiator_group", 00:07:40.692 "iscsi_initiator_group_remove_initiators", 00:07:40.692 "iscsi_initiator_group_add_initiators", 00:07:40.692 "iscsi_create_initiator_group", 00:07:40.692 "iscsi_get_initiator_groups", 00:07:40.692 "nvmf_set_crdt", 00:07:40.692 "nvmf_set_config", 00:07:40.692 "nvmf_set_max_subsystems", 00:07:40.692 "nvmf_stop_mdns_prr", 00:07:40.692 "nvmf_publish_mdns_prr", 00:07:40.692 "nvmf_subsystem_get_listeners", 
00:07:40.692 "nvmf_subsystem_get_qpairs", 00:07:40.692 "nvmf_subsystem_get_controllers", 00:07:40.692 "nvmf_get_stats", 00:07:40.692 "nvmf_get_transports", 00:07:40.692 "nvmf_create_transport", 00:07:40.692 "nvmf_get_targets", 00:07:40.692 "nvmf_delete_target", 00:07:40.692 "nvmf_create_target", 00:07:40.692 "nvmf_subsystem_allow_any_host", 00:07:40.692 "nvmf_subsystem_set_keys", 00:07:40.692 "nvmf_subsystem_remove_host", 00:07:40.692 "nvmf_subsystem_add_host", 00:07:40.692 "nvmf_ns_remove_host", 00:07:40.692 "nvmf_ns_add_host", 00:07:40.692 "nvmf_subsystem_remove_ns", 00:07:40.692 "nvmf_subsystem_set_ns_ana_group", 00:07:40.692 "nvmf_subsystem_add_ns", 00:07:40.692 "nvmf_subsystem_listener_set_ana_state", 00:07:40.692 "nvmf_discovery_get_referrals", 00:07:40.692 "nvmf_discovery_remove_referral", 00:07:40.692 "nvmf_discovery_add_referral", 00:07:40.692 "nvmf_subsystem_remove_listener", 00:07:40.692 "nvmf_subsystem_add_listener", 00:07:40.692 "nvmf_delete_subsystem", 00:07:40.692 "nvmf_create_subsystem", 00:07:40.692 "nvmf_get_subsystems", 00:07:40.692 "env_dpdk_get_mem_stats", 00:07:40.692 "nbd_get_disks", 00:07:40.692 "nbd_stop_disk", 00:07:40.692 "nbd_start_disk", 00:07:40.692 "ublk_recover_disk", 00:07:40.692 "ublk_get_disks", 00:07:40.692 "ublk_stop_disk", 00:07:40.692 "ublk_start_disk", 00:07:40.692 "ublk_destroy_target", 00:07:40.692 "ublk_create_target", 00:07:40.692 "virtio_blk_create_transport", 00:07:40.692 "virtio_blk_get_transports", 00:07:40.692 "vhost_controller_set_coalescing", 00:07:40.692 "vhost_get_controllers", 00:07:40.692 "vhost_delete_controller", 00:07:40.692 "vhost_create_blk_controller", 00:07:40.692 "vhost_scsi_controller_remove_target", 00:07:40.692 "vhost_scsi_controller_add_target", 00:07:40.692 "vhost_start_scsi_controller", 00:07:40.692 "vhost_create_scsi_controller", 00:07:40.692 "thread_set_cpumask", 00:07:40.692 "scheduler_set_options", 00:07:40.692 "framework_get_governor", 00:07:40.692 "framework_get_scheduler", 00:07:40.692 
"framework_set_scheduler", 00:07:40.692 "framework_get_reactors", 00:07:40.692 "thread_get_io_channels", 00:07:40.692 "thread_get_pollers", 00:07:40.692 "thread_get_stats", 00:07:40.692 "framework_monitor_context_switch", 00:07:40.692 "spdk_kill_instance", 00:07:40.692 "log_enable_timestamps", 00:07:40.692 "log_get_flags", 00:07:40.692 "log_clear_flag", 00:07:40.692 "log_set_flag", 00:07:40.692 "log_get_level", 00:07:40.692 "log_set_level", 00:07:40.692 "log_get_print_level", 00:07:40.692 "log_set_print_level", 00:07:40.692 "framework_enable_cpumask_locks", 00:07:40.692 "framework_disable_cpumask_locks", 00:07:40.692 "framework_wait_init", 00:07:40.692 "framework_start_init", 00:07:40.692 "scsi_get_devices", 00:07:40.692 "bdev_get_histogram", 00:07:40.692 "bdev_enable_histogram", 00:07:40.692 "bdev_set_qos_limit", 00:07:40.692 "bdev_set_qd_sampling_period", 00:07:40.692 "bdev_get_bdevs", 00:07:40.692 "bdev_reset_iostat", 00:07:40.692 "bdev_get_iostat", 00:07:40.692 "bdev_examine", 00:07:40.692 "bdev_wait_for_examine", 00:07:40.692 "bdev_set_options", 00:07:40.692 "accel_get_stats", 00:07:40.692 "accel_set_options", 00:07:40.692 "accel_set_driver", 00:07:40.692 "accel_crypto_key_destroy", 00:07:40.692 "accel_crypto_keys_get", 00:07:40.692 "accel_crypto_key_create", 00:07:40.692 "accel_assign_opc", 00:07:40.692 "accel_get_module_info", 00:07:40.692 "accel_get_opc_assignments", 00:07:40.692 "vmd_rescan", 00:07:40.692 "vmd_remove_device", 00:07:40.692 "vmd_enable", 00:07:40.692 "sock_get_default_impl", 00:07:40.692 "sock_set_default_impl", 00:07:40.692 "sock_impl_set_options", 00:07:40.692 "sock_impl_get_options", 00:07:40.692 "iobuf_get_stats", 00:07:40.692 "iobuf_set_options", 00:07:40.692 "keyring_get_keys", 00:07:40.692 "vfu_tgt_set_base_path", 00:07:40.692 "framework_get_pci_devices", 00:07:40.692 "framework_get_config", 00:07:40.692 "framework_get_subsystems", 00:07:40.692 "fsdev_set_opts", 00:07:40.692 "fsdev_get_opts", 00:07:40.692 "trace_get_info", 
00:07:40.692 "trace_get_tpoint_group_mask", 00:07:40.692 "trace_disable_tpoint_group", 00:07:40.692 "trace_enable_tpoint_group", 00:07:40.692 "trace_clear_tpoint_mask", 00:07:40.692 "trace_set_tpoint_mask", 00:07:40.692 "notify_get_notifications", 00:07:40.692 "notify_get_types", 00:07:40.692 "spdk_get_version", 00:07:40.692 "rpc_get_methods" 00:07:40.692 ] 00:07:40.692 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:40.692 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:40.692 10:18:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2421091 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2421091 ']' 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2421091 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421091 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.692 10:18:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.693 10:18:12 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421091' 00:07:40.693 killing process with pid 2421091 00:07:40.693 10:18:12 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2421091 00:07:40.693 10:18:12 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2421091 00:07:41.258 00:07:41.258 real 0m1.395s 00:07:41.258 user 0m2.485s 00:07:41.258 sys 0m0.467s 00:07:41.258 10:18:13 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.258 10:18:13 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:41.258 ************************************ 00:07:41.258 END TEST spdkcli_tcp 00:07:41.258 ************************************ 00:07:41.258 10:18:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.258 10:18:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.258 10:18:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.258 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:07:41.258 ************************************ 00:07:41.258 START TEST dpdk_mem_utility 00:07:41.258 ************************************ 00:07:41.258 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:41.258 * Looking for test storage... 00:07:41.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:41.258 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:41.258 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.259 10:18:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:07:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.259 --rc genhtml_branch_coverage=1 00:07:41.259 --rc genhtml_function_coverage=1 00:07:41.259 --rc genhtml_legend=1 00:07:41.259 --rc geninfo_all_blocks=1 00:07:41.259 --rc geninfo_unexecuted_blocks=1 00:07:41.259 00:07:41.259 ' 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.259 --rc genhtml_branch_coverage=1 00:07:41.259 --rc genhtml_function_coverage=1 00:07:41.259 --rc genhtml_legend=1 00:07:41.259 --rc geninfo_all_blocks=1 00:07:41.259 --rc geninfo_unexecuted_blocks=1 00:07:41.259 00:07:41.259 ' 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.259 --rc genhtml_branch_coverage=1 00:07:41.259 --rc genhtml_function_coverage=1 00:07:41.259 --rc genhtml_legend=1 00:07:41.259 --rc geninfo_all_blocks=1 00:07:41.259 --rc geninfo_unexecuted_blocks=1 00:07:41.259 00:07:41.259 ' 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:41.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.259 --rc genhtml_branch_coverage=1 00:07:41.259 --rc genhtml_function_coverage=1 00:07:41.259 --rc genhtml_legend=1 00:07:41.259 --rc geninfo_all_blocks=1 00:07:41.259 --rc geninfo_unexecuted_blocks=1 00:07:41.259 00:07:41.259 ' 00:07:41.259 10:18:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:41.259 10:18:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2421306 00:07:41.259 10:18:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:41.259 10:18:13 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2421306 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2421306 ']' 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.259 10:18:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:41.259 [2024-12-09 10:18:13.680032] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:41.259 [2024-12-09 10:18:13.680128] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421306 ] 00:07:41.516 [2024-12-09 10:18:13.745214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.516 [2024-12-09 10:18:13.804304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.774 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.774 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:41.774 10:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:41.774 10:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:41.774 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.774 
10:18:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:41.774 { 00:07:41.774 "filename": "/tmp/spdk_mem_dump.txt" 00:07:41.774 } 00:07:41.774 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.774 10:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:41.774 DPDK memory size 818.000000 MiB in 1 heap(s) 00:07:41.774 1 heaps totaling size 818.000000 MiB 00:07:41.774 size: 818.000000 MiB heap id: 0 00:07:41.774 end heaps---------- 00:07:41.774 9 mempools totaling size 603.782043 MiB 00:07:41.774 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:41.774 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:41.774 size: 100.555481 MiB name: bdev_io_2421306 00:07:41.774 size: 50.003479 MiB name: msgpool_2421306 00:07:41.774 size: 36.509338 MiB name: fsdev_io_2421306 00:07:41.774 size: 21.763794 MiB name: PDU_Pool 00:07:41.774 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:41.774 size: 4.133484 MiB name: evtpool_2421306 00:07:41.774 size: 0.026123 MiB name: Session_Pool 00:07:41.774 end mempools------- 00:07:41.774 6 memzones totaling size 4.142822 MiB 00:07:41.774 size: 1.000366 MiB name: RG_ring_0_2421306 00:07:41.774 size: 1.000366 MiB name: RG_ring_1_2421306 00:07:41.774 size: 1.000366 MiB name: RG_ring_4_2421306 00:07:41.774 size: 1.000366 MiB name: RG_ring_5_2421306 00:07:41.774 size: 0.125366 MiB name: RG_ring_2_2421306 00:07:41.774 size: 0.015991 MiB name: RG_ring_3_2421306 00:07:41.774 end memzones------- 00:07:41.774 10:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:41.774 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:41.774 list of free elements. 
size: 10.852478 MiB 00:07:41.774 element at address: 0x200019200000 with size: 0.999878 MiB 00:07:41.774 element at address: 0x200019400000 with size: 0.999878 MiB 00:07:41.774 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:41.774 element at address: 0x200032000000 with size: 0.994446 MiB 00:07:41.774 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:41.774 element at address: 0x200012c00000 with size: 0.944275 MiB 00:07:41.774 element at address: 0x200019600000 with size: 0.936584 MiB 00:07:41.774 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:41.774 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:07:41.774 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:41.774 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:41.774 element at address: 0x200019800000 with size: 0.485657 MiB 00:07:41.774 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:41.774 element at address: 0x200028200000 with size: 0.410034 MiB 00:07:41.774 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:41.774 list of standard malloc elements. 
size: 199.218628 MiB 00:07:41.774 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:41.774 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:41.774 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:07:41.774 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:07:41.774 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:07:41.774 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:41.774 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:07:41.774 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:41.774 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:07:41.774 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:41.774 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:41.774 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:41.774 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:41.774 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:41.774 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:41.774 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:41.774 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:41.775 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:41.775 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:41.775 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:41.775 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:07:41.775 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:07:41.775 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:07:41.775 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200028268f80 with size: 0.000183 MiB 00:07:41.775 element at address: 0x200028269040 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:07:41.775 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:07:41.775 list of memzone associated elements. 
size: 607.928894 MiB 00:07:41.775 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:07:41.775 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:41.775 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:07:41.775 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:41.775 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:07:41.775 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2421306_0 00:07:41.775 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:41.775 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2421306_0 00:07:41.775 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:41.775 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2421306_0 00:07:41.775 element at address: 0x2000199be940 with size: 20.255554 MiB 00:07:41.775 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:41.775 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:07:41.775 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:41.775 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:41.775 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2421306_0 00:07:41.775 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:41.775 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2421306 00:07:41.775 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:41.775 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2421306 00:07:41.775 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:41.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:41.775 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:07:41.775 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:41.775 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:41.775 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:41.775 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:41.775 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:41.775 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:41.775 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2421306 00:07:41.775 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:41.775 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2421306 00:07:41.775 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:07:41.775 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2421306 00:07:41.775 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:07:41.775 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2421306 00:07:41.775 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:41.775 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2421306 00:07:41.775 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:41.775 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2421306 00:07:41.775 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:41.775 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:41.775 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:41.775 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:41.775 element at address: 0x20001987c540 with size: 0.250488 MiB 00:07:41.775 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:41.775 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:41.775 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2421306 00:07:41.775 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:41.775 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2421306 00:07:41.775 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:41.775 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:41.775 element at address: 0x200028269100 with size: 0.023743 MiB 00:07:41.775 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:41.775 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:41.775 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2421306 00:07:41.775 element at address: 0x20002826f240 with size: 0.002441 MiB 00:07:41.775 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:41.775 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:41.775 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2421306 00:07:41.775 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:41.775 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2421306 00:07:41.775 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:41.775 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2421306 00:07:41.775 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:07:41.775 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:41.775 10:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:41.775 10:18:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2421306 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2421306 ']' 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2421306 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421306 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.775 10:18:14 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421306' 00:07:41.775 killing process with pid 2421306 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2421306 00:07:41.775 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2421306 00:07:42.339 00:07:42.339 real 0m1.186s 00:07:42.339 user 0m1.145s 00:07:42.339 sys 0m0.434s 00:07:42.339 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.339 10:18:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:42.339 ************************************ 00:07:42.339 END TEST dpdk_mem_utility 00:07:42.339 ************************************ 00:07:42.339 10:18:14 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:42.339 10:18:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:42.339 10:18:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.339 10:18:14 -- common/autotest_common.sh@10 -- # set +x 00:07:42.339 ************************************ 00:07:42.339 START TEST event 00:07:42.339 ************************************ 00:07:42.339 10:18:14 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:42.339 * Looking for test storage... 
00:07:42.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:42.340 10:18:14 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.340 10:18:14 event -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.340 10:18:14 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.597 10:18:14 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.597 10:18:14 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.597 10:18:14 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.597 10:18:14 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.597 10:18:14 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.597 10:18:14 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.597 10:18:14 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.597 10:18:14 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.597 10:18:14 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.597 10:18:14 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.597 10:18:14 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.597 10:18:14 event -- scripts/common.sh@344 -- # case "$op" in 00:07:42.597 10:18:14 event -- scripts/common.sh@345 -- # : 1 00:07:42.597 10:18:14 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.597 10:18:14 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:42.597 10:18:14 event -- scripts/common.sh@365 -- # decimal 1 00:07:42.597 10:18:14 event -- scripts/common.sh@353 -- # local d=1 00:07:42.597 10:18:14 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.597 10:18:14 event -- scripts/common.sh@355 -- # echo 1 00:07:42.597 10:18:14 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.597 10:18:14 event -- scripts/common.sh@366 -- # decimal 2 00:07:42.597 10:18:14 event -- scripts/common.sh@353 -- # local d=2 00:07:42.597 10:18:14 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.597 10:18:14 event -- scripts/common.sh@355 -- # echo 2 00:07:42.597 10:18:14 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.597 10:18:14 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.597 10:18:14 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.597 10:18:14 event -- scripts/common.sh@368 -- # return 0 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.597 --rc genhtml_branch_coverage=1 00:07:42.597 --rc genhtml_function_coverage=1 00:07:42.597 --rc genhtml_legend=1 00:07:42.597 --rc geninfo_all_blocks=1 00:07:42.597 --rc geninfo_unexecuted_blocks=1 00:07:42.597 00:07:42.597 ' 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.597 --rc genhtml_branch_coverage=1 00:07:42.597 --rc genhtml_function_coverage=1 00:07:42.597 --rc genhtml_legend=1 00:07:42.597 --rc geninfo_all_blocks=1 00:07:42.597 --rc geninfo_unexecuted_blocks=1 00:07:42.597 00:07:42.597 ' 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.597 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:42.597 --rc genhtml_branch_coverage=1 00:07:42.597 --rc genhtml_function_coverage=1 00:07:42.597 --rc genhtml_legend=1 00:07:42.597 --rc geninfo_all_blocks=1 00:07:42.597 --rc geninfo_unexecuted_blocks=1 00:07:42.597 00:07:42.597 ' 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.597 --rc genhtml_branch_coverage=1 00:07:42.597 --rc genhtml_function_coverage=1 00:07:42.597 --rc genhtml_legend=1 00:07:42.597 --rc geninfo_all_blocks=1 00:07:42.597 --rc geninfo_unexecuted_blocks=1 00:07:42.597 00:07:42.597 ' 00:07:42.597 10:18:14 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:42.597 10:18:14 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:42.597 10:18:14 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:42.597 10:18:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.597 10:18:14 event -- common/autotest_common.sh@10 -- # set +x 00:07:42.597 ************************************ 00:07:42.597 START TEST event_perf 00:07:42.597 ************************************ 00:07:42.597 10:18:14 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:42.597 Running I/O for 1 seconds...[2024-12-09 10:18:14.907468] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:42.597 [2024-12-09 10:18:14.907536] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421508 ] 00:07:42.597 [2024-12-09 10:18:14.976702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.854 [2024-12-09 10:18:15.041737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.854 [2024-12-09 10:18:15.041793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.854 [2024-12-09 10:18:15.041868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.854 [2024-12-09 10:18:15.041871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.783 Running I/O for 1 seconds... 00:07:43.783 lcore 0: 235378 00:07:43.783 lcore 1: 235378 00:07:43.783 lcore 2: 235379 00:07:43.783 lcore 3: 235379 00:07:43.783 done. 
00:07:43.783 00:07:43.783 real 0m1.250s 00:07:43.783 user 0m4.166s 00:07:43.783 sys 0m0.077s 00:07:43.783 10:18:16 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.783 10:18:16 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.783 ************************************ 00:07:43.783 END TEST event_perf 00:07:43.783 ************************************ 00:07:43.783 10:18:16 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:43.783 10:18:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:43.783 10:18:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.783 10:18:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:43.783 ************************************ 00:07:43.783 START TEST event_reactor 00:07:43.783 ************************************ 00:07:43.783 10:18:16 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:43.783 [2024-12-09 10:18:16.212131] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:43.783 [2024-12-09 10:18:16.212220] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421669 ] 00:07:44.040 [2024-12-09 10:18:16.282103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.040 [2024-12-09 10:18:16.339125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.425 test_start 00:07:45.425 oneshot 00:07:45.425 tick 100 00:07:45.425 tick 100 00:07:45.425 tick 250 00:07:45.425 tick 100 00:07:45.425 tick 100 00:07:45.425 tick 100 00:07:45.425 tick 250 00:07:45.425 tick 500 00:07:45.425 tick 100 00:07:45.425 tick 100 00:07:45.425 tick 250 00:07:45.425 tick 100 00:07:45.425 tick 100 00:07:45.425 test_end 00:07:45.425 00:07:45.425 real 0m1.241s 00:07:45.425 user 0m1.168s 00:07:45.425 sys 0m0.068s 00:07:45.425 10:18:17 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.425 10:18:17 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 END TEST event_reactor 00:07:45.425 ************************************ 00:07:45.425 10:18:17 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:45.425 10:18:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:45.425 10:18:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.425 10:18:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.425 ************************************ 00:07:45.425 START TEST event_reactor_perf 00:07:45.425 ************************************ 00:07:45.425 10:18:17 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:45.425 [2024-12-09 10:18:17.501804] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:45.425 [2024-12-09 10:18:17.501869] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2421932 ] 00:07:45.425 [2024-12-09 10:18:17.567493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.425 [2024-12-09 10:18:17.622121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.358 test_start 00:07:46.358 test_end 00:07:46.358 Performance: 444455 events per second 00:07:46.358 00:07:46.358 real 0m1.232s 00:07:46.358 user 0m1.161s 00:07:46.358 sys 0m0.066s 00:07:46.358 10:18:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.358 10:18:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:46.358 ************************************ 00:07:46.358 END TEST event_reactor_perf 00:07:46.358 ************************************ 00:07:46.358 10:18:18 event -- event/event.sh@49 -- # uname -s 00:07:46.358 10:18:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:46.358 10:18:18 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:46.358 10:18:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.358 10:18:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.358 10:18:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:46.358 ************************************ 00:07:46.358 START TEST event_scheduler 00:07:46.358 ************************************ 00:07:46.358 10:18:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:46.616 * Looking for test storage... 00:07:46.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.616 10:18:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.616 --rc genhtml_branch_coverage=1 00:07:46.616 --rc genhtml_function_coverage=1 00:07:46.616 --rc genhtml_legend=1 00:07:46.616 --rc geninfo_all_blocks=1 00:07:46.616 --rc geninfo_unexecuted_blocks=1 00:07:46.616 00:07:46.616 ' 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.616 --rc genhtml_branch_coverage=1 00:07:46.616 --rc genhtml_function_coverage=1 00:07:46.616 --rc 
genhtml_legend=1 00:07:46.616 --rc geninfo_all_blocks=1 00:07:46.616 --rc geninfo_unexecuted_blocks=1 00:07:46.616 00:07:46.616 ' 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.616 --rc genhtml_branch_coverage=1 00:07:46.616 --rc genhtml_function_coverage=1 00:07:46.616 --rc genhtml_legend=1 00:07:46.616 --rc geninfo_all_blocks=1 00:07:46.616 --rc geninfo_unexecuted_blocks=1 00:07:46.616 00:07:46.616 ' 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.616 --rc genhtml_branch_coverage=1 00:07:46.616 --rc genhtml_function_coverage=1 00:07:46.616 --rc genhtml_legend=1 00:07:46.616 --rc geninfo_all_blocks=1 00:07:46.616 --rc geninfo_unexecuted_blocks=1 00:07:46.616 00:07:46.616 ' 00:07:46.616 10:18:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:46.616 10:18:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2422128 00:07:46.616 10:18:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:46.616 10:18:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.616 10:18:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2422128 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2422128 ']' 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.616 10:18:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:46.616 [2024-12-09 10:18:18.970570] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:07:46.616 [2024-12-09 10:18:18.970650] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422128 ] 00:07:46.616 [2024-12-09 10:18:19.039919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.874 [2024-12-09 10:18:19.102003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.874 [2024-12-09 10:18:19.102066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.874 [2024-12-09 10:18:19.102132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.874 [2024-12-09 10:18:19.102136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:46.874 10:18:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:46.874 [2024-12-09 10:18:19.203010] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:46.874 [2024-12-09 10:18:19.203036] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:46.874 [2024-12-09 10:18:19.203069] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:46.874 [2024-12-09 10:18:19.203080] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:46.874 [2024-12-09 10:18:19.203090] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.874 10:18:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:46.874 [2024-12-09 10:18:19.307324] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.874 10:18:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.874 10:18:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 ************************************ 00:07:47.132 START TEST scheduler_create_thread 00:07:47.132 ************************************ 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 2 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 3 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 4 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 5 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 6 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 7 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 8 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 9 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 10 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.132 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.697 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.697 00:07:47.697 real 0m0.590s 00:07:47.697 user 0m0.012s 00:07:47.697 sys 0m0.003s 00:07:47.697 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.697 10:18:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.697 ************************************ 00:07:47.697 END TEST scheduler_create_thread 00:07:47.697 ************************************ 00:07:47.697 10:18:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:47.697 10:18:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2422128 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2422128 ']' 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2422128 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2422128 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2422128' 00:07:47.697 killing process with pid 2422128 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2422128 00:07:47.697 10:18:19 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2422128 00:07:48.263 [2024-12-09 10:18:20.407572] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:48.263 00:07:48.263 real 0m1.895s 00:07:48.263 user 0m2.542s 00:07:48.263 sys 0m0.369s 00:07:48.263 10:18:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.263 10:18:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:48.263 ************************************ 00:07:48.263 END TEST event_scheduler 00:07:48.263 ************************************ 00:07:48.263 10:18:20 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:48.263 10:18:20 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:48.263 10:18:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.263 10:18:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.263 10:18:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:48.521 ************************************ 00:07:48.521 START TEST app_repeat 00:07:48.521 ************************************ 00:07:48.521 10:18:20 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2422343 00:07:48.521 10:18:20 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:48.522 10:18:20 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:48.522 10:18:20 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2422343' 00:07:48.522 Process app_repeat pid: 2422343 00:07:48.522 10:18:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:48.522 10:18:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:48.522 spdk_app_start Round 0 00:07:48.522 10:18:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2422343 /var/tmp/spdk-nbd.sock 00:07:48.522 10:18:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2422343 ']' 00:07:48.522 10:18:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:48.522 10:18:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.522 10:18:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:48.522 10:18:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.522 10:18:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:48.522 [2024-12-09 10:18:20.754370] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:07:48.522 [2024-12-09 10:18:20.754463] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2422343 ] 00:07:48.522 [2024-12-09 10:18:20.820676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:48.522 [2024-12-09 10:18:20.879440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.522 [2024-12-09 10:18:20.879445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.779 10:18:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.779 10:18:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:48.779 10:18:21 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:49.036 Malloc0 00:07:49.036 10:18:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:49.293 Malloc1 00:07:49.293 10:18:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:49.293 
10:18:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:49.293 10:18:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:49.550 /dev/nbd0 00:07:49.550 10:18:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:49.550 10:18:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:49.550 1+0 records in 00:07:49.550 1+0 records out 00:07:49.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204583 s, 20.0 MB/s 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.550 10:18:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:49.550 10:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.550 10:18:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:49.550 10:18:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:49.806 /dev/nbd1 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:49.806 10:18:22 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:49.806 1+0 records in 00:07:49.806 1+0 records out 00:07:49.806 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217031 s, 18.9 MB/s 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:49.806 10:18:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:49.806 10:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:50.370 { 00:07:50.370 "nbd_device": "/dev/nbd0", 00:07:50.370 "bdev_name": "Malloc0" 00:07:50.370 }, 00:07:50.370 { 00:07:50.370 "nbd_device": "/dev/nbd1", 00:07:50.370 "bdev_name": "Malloc1" 00:07:50.370 } 00:07:50.370 ]' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:50.370 { 00:07:50.370 "nbd_device": "/dev/nbd0", 00:07:50.370 "bdev_name": "Malloc0" 00:07:50.370 
}, 00:07:50.370 { 00:07:50.370 "nbd_device": "/dev/nbd1", 00:07:50.370 "bdev_name": "Malloc1" 00:07:50.370 } 00:07:50.370 ]' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:50.370 /dev/nbd1' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:50.370 /dev/nbd1' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:50.370 256+0 records in 00:07:50.370 256+0 records out 00:07:50.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503506 s, 208 MB/s 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:50.370 256+0 records in 00:07:50.370 256+0 records out 00:07:50.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201439 s, 52.1 MB/s 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:50.370 256+0 records in 00:07:50.370 256+0 records out 00:07:50.370 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225276 s, 46.5 MB/s 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:50.370 10:18:22 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.370 10:18:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.627 10:18:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:50.883 10:18:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:50.883 10:18:23 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:50.883 10:18:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:50.884 10:18:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:51.184 10:18:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:51.184 10:18:23 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:51.442 10:18:23 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:51.699 [2024-12-09 10:18:24.076464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:51.699 [2024-12-09 10:18:24.132092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.699 [2024-12-09 10:18:24.132092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.956 [2024-12-09 10:18:24.187504] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:51.956 [2024-12-09 10:18:24.187567] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:54.477 10:18:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:54.477 10:18:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:54.477 spdk_app_start Round 1 00:07:54.477 10:18:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2422343 /var/tmp/spdk-nbd.sock 00:07:54.477 10:18:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2422343 ']' 00:07:54.477 10:18:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:54.477 10:18:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.477 10:18:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:54.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:54.477 10:18:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.477 10:18:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:54.735 10:18:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.735 10:18:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:54.735 10:18:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:54.992 Malloc0 00:07:54.992 10:18:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:55.250 Malloc1 00:07:55.250 10:18:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.250 10:18:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:55.892 /dev/nbd0 00:07:55.892 10:18:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:55.892 10:18:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:55.892 10:18:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:55.893 10:18:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:55.893 1+0 records in 00:07:55.893 1+0 records out 00:07:55.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179889 s, 22.8 MB/s 00:07:55.893 10:18:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:55.893 10:18:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:55.893 10:18:27 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:55.893 10:18:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:55.893 10:18:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:55.893 10:18:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:55.893 10:18:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:55.893 10:18:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:55.893 /dev/nbd1 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:56.174 1+0 records in 00:07:56.174 1+0 records out 00:07:56.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195255 s, 21.0 MB/s 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:56.174 10:18:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:56.174 { 00:07:56.174 "nbd_device": "/dev/nbd0", 00:07:56.174 "bdev_name": "Malloc0" 00:07:56.174 }, 00:07:56.174 { 00:07:56.174 "nbd_device": "/dev/nbd1", 00:07:56.174 "bdev_name": "Malloc1" 00:07:56.174 } 00:07:56.174 ]' 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:56.174 { 00:07:56.174 "nbd_device": "/dev/nbd0", 00:07:56.174 "bdev_name": "Malloc0" 00:07:56.174 }, 00:07:56.174 { 00:07:56.174 "nbd_device": "/dev/nbd1", 00:07:56.174 "bdev_name": "Malloc1" 00:07:56.174 } 00:07:56.174 ]' 00:07:56.174 10:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:56.431 10:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:56.432 /dev/nbd1' 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:56.432 /dev/nbd1' 00:07:56.432 
10:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:56.432 256+0 records in 00:07:56.432 256+0 records out 00:07:56.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406863 s, 258 MB/s 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:56.432 256+0 records in 00:07:56.432 256+0 records out 00:07:56.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212906 s, 49.3 MB/s 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:56.432 256+0 records in 00:07:56.432 256+0 records out 00:07:56.432 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233308 s, 44.9 MB/s 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.432 10:18:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.689 10:18:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.689 10:18:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.689 10:18:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.689 10:18:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.689 10:18:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.689 10:18:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.689 10:18:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:56.689 10:18:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.689 10:18:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.689 10:18:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:56.946 10:18:29 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.946 10:18:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:57.207 10:18:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:57.207 10:18:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:57.770 10:18:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:57.770 [2024-12-09 10:18:30.127465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:57.770 [2024-12-09 10:18:30.183384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.770 [2024-12-09 10:18:30.183384] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.027 [2024-12-09 10:18:30.242961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:58.027 [2024-12-09 10:18:30.243021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:00.548 10:18:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:00.548 10:18:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:00.548 spdk_app_start Round 2 00:08:00.548 10:18:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2422343 /var/tmp/spdk-nbd.sock 00:08:00.548 10:18:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2422343 ']' 00:08:00.548 10:18:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:00.548 10:18:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.548 10:18:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:00.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:00.548 10:18:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.549 10:18:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:00.806 10:18:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.806 10:18:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:00.806 10:18:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.064 Malloc0 00:08:01.064 10:18:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:01.321 Malloc1 00:08:01.321 10:18:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:01.321 10:18:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.321 10:18:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.321 10:18:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:01.321 10:18:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.321 10:18:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:01.321 10:18:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.322 10:18:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:01.887 /dev/nbd0 00:08:01.887 10:18:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.887 10:18:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.887 10:18:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:01.887 1+0 records in 00:08:01.887 1+0 records out 00:08:01.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195521 s, 20.9 MB/s 00:08:01.888 10:18:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:01.888 10:18:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:01.888 10:18:34 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:01.888 10:18:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.888 10:18:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:01.888 10:18:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.888 10:18:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:01.888 10:18:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:02.145 /dev/nbd1 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:02.145 1+0 records in 00:08:02.145 1+0 records out 00:08:02.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207501 s, 19.7 MB/s 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.145 10:18:34 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.145 10:18:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:02.403 { 00:08:02.403 "nbd_device": "/dev/nbd0", 00:08:02.403 "bdev_name": "Malloc0" 00:08:02.403 }, 00:08:02.403 { 00:08:02.403 "nbd_device": "/dev/nbd1", 00:08:02.403 "bdev_name": "Malloc1" 00:08:02.403 } 00:08:02.403 ]' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:02.403 { 00:08:02.403 "nbd_device": "/dev/nbd0", 00:08:02.403 "bdev_name": "Malloc0" 00:08:02.403 }, 00:08:02.403 { 00:08:02.403 "nbd_device": "/dev/nbd1", 00:08:02.403 "bdev_name": "Malloc1" 00:08:02.403 } 00:08:02.403 ]' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:02.403 /dev/nbd1' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:02.403 /dev/nbd1' 00:08:02.403 
10:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:02.403 256+0 records in 00:08:02.403 256+0 records out 00:08:02.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516782 s, 203 MB/s 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:02.403 256+0 records in 00:08:02.403 256+0 records out 00:08:02.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199545 s, 52.5 MB/s 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:02.403 256+0 records in 00:08:02.403 256+0 records out 00:08:02.403 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222596 s, 47.1 MB/s 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.403 10:18:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:02.660 10:18:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:03.223 10:18:35 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:03.223 10:18:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:03.481 10:18:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:03.481 10:18:35 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:03.737 10:18:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:03.994 [2024-12-09 10:18:36.181217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:03.994 [2024-12-09 10:18:36.236674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.994 [2024-12-09 10:18:36.236680] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.994 [2024-12-09 10:18:36.295931] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:03.994 [2024-12-09 10:18:36.296003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:07.267 10:18:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2422343 /var/tmp/spdk-nbd.sock 00:08:07.267 10:18:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2422343 ']' 00:08:07.267 10:18:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:07.267 10:18:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.267 10:18:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:07.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:07.267 10:18:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.267 10:18:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:07.267 10:18:39 event.app_repeat -- event/event.sh@39 -- # killprocess 2422343 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2422343 ']' 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2422343 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2422343 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2422343' 00:08:07.267 killing process with pid 2422343 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2422343 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2422343 00:08:07.267 spdk_app_start is called in Round 0. 00:08:07.267 Shutdown signal received, stop current app iteration 00:08:07.267 Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 reinitialization... 00:08:07.267 spdk_app_start is called in Round 1. 00:08:07.267 Shutdown signal received, stop current app iteration 00:08:07.267 Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 reinitialization... 00:08:07.267 spdk_app_start is called in Round 2. 
00:08:07.267 Shutdown signal received, stop current app iteration 00:08:07.267 Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 reinitialization... 00:08:07.267 spdk_app_start is called in Round 3. 00:08:07.267 Shutdown signal received, stop current app iteration 00:08:07.267 10:18:39 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:07.267 10:18:39 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:07.267 00:08:07.267 real 0m18.748s 00:08:07.267 user 0m41.421s 00:08:07.267 sys 0m3.259s 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.267 10:18:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:07.267 ************************************ 00:08:07.267 END TEST app_repeat 00:08:07.267 ************************************ 00:08:07.267 10:18:39 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:07.267 10:18:39 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:07.267 10:18:39 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.267 10:18:39 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.267 10:18:39 event -- common/autotest_common.sh@10 -- # set +x 00:08:07.267 ************************************ 00:08:07.267 START TEST cpu_locks 00:08:07.267 ************************************ 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:07.267 * Looking for test storage... 
00:08:07.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.267 10:18:39 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.267 --rc genhtml_branch_coverage=1 00:08:07.267 --rc genhtml_function_coverage=1 00:08:07.267 --rc genhtml_legend=1 00:08:07.267 --rc geninfo_all_blocks=1 00:08:07.267 --rc geninfo_unexecuted_blocks=1 00:08:07.267 00:08:07.267 ' 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.267 --rc genhtml_branch_coverage=1 00:08:07.267 --rc genhtml_function_coverage=1 00:08:07.267 --rc genhtml_legend=1 00:08:07.267 --rc geninfo_all_blocks=1 00:08:07.267 --rc geninfo_unexecuted_blocks=1 
00:08:07.267 00:08:07.267 ' 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.267 --rc genhtml_branch_coverage=1 00:08:07.267 --rc genhtml_function_coverage=1 00:08:07.267 --rc genhtml_legend=1 00:08:07.267 --rc geninfo_all_blocks=1 00:08:07.267 --rc geninfo_unexecuted_blocks=1 00:08:07.267 00:08:07.267 ' 00:08:07.267 10:18:39 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:07.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.267 --rc genhtml_branch_coverage=1 00:08:07.267 --rc genhtml_function_coverage=1 00:08:07.267 --rc genhtml_legend=1 00:08:07.267 --rc geninfo_all_blocks=1 00:08:07.267 --rc geninfo_unexecuted_blocks=1 00:08:07.267 00:08:07.267 ' 00:08:07.267 10:18:39 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:07.268 10:18:39 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:07.268 10:18:39 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:07.268 10:18:39 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:07.268 10:18:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.268 10:18:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.268 10:18:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.268 ************************************ 00:08:07.268 START TEST default_locks 00:08:07.268 ************************************ 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2424829 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2424829 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2424829 ']' 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.268 10:18:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:07.530 [2024-12-09 10:18:39.754420] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:08:07.530 [2024-12-09 10:18:39.754514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424829 ] 00:08:07.530 [2024-12-09 10:18:39.820992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.530 [2024-12-09 10:18:39.879881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.793 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.793 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:08:07.793 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2424829 00:08:07.793 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2424829 00:08:07.793 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:08.050 lslocks: write error 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2424829 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2424829 ']' 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2424829 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424829 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2424829' 00:08:08.050 killing process with pid 2424829 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2424829 00:08:08.050 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2424829 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2424829 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2424829 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2424829 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2424829 ']' 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2424829) - No such process 00:08:08.615 ERROR: process (pid: 2424829) is no longer running 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:08.615 00:08:08.615 real 0m1.197s 00:08:08.615 user 0m1.146s 00:08:08.615 sys 0m0.520s 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.615 10:18:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.615 ************************************ 00:08:08.615 END TEST default_locks 00:08:08.615 ************************************ 00:08:08.615 10:18:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:08.615 10:18:40 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:08.615 10:18:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.615 10:18:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:08.615 ************************************ 00:08:08.615 START TEST default_locks_via_rpc 00:08:08.615 ************************************ 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2424993 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2424993 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2424993 ']' 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.615 10:18:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:08.615 [2024-12-09 10:18:41.001184] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:08:08.615 [2024-12-09 10:18:41.001284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2424993 ] 00:08:08.873 [2024-12-09 10:18:41.068397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.873 [2024-12-09 10:18:41.126865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.145 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.145 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.146 10:18:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2424993 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2424993 00:08:09.146 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2424993 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2424993 ']' 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2424993 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2424993 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2424993' 00:08:09.403 killing process with pid 2424993 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2424993 00:08:09.403 10:18:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2424993 00:08:09.969 00:08:09.969 real 0m1.219s 00:08:09.969 user 0m1.186s 00:08:09.969 sys 0m0.497s 00:08:09.969 10:18:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.969 10:18:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.969 ************************************ 00:08:09.969 END TEST default_locks_via_rpc 00:08:09.969 ************************************ 00:08:09.969 10:18:42 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:09.969 10:18:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:09.969 10:18:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.969 10:18:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:09.969 ************************************ 00:08:09.969 START TEST non_locking_app_on_locked_coremask 00:08:09.969 ************************************ 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2425270 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2425270 /var/tmp/spdk.sock 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2425270 ']' 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.969 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.970 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:09.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.970 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.970 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:09.970 [2024-12-09 10:18:42.269876] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:09.970 [2024-12-09 10:18:42.269946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425270 ] 00:08:09.970 [2024-12-09 10:18:42.331301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.970 [2024-12-09 10:18:42.385396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2425282 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2425282 /var/tmp/spdk2.sock 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2425282 ']' 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:10.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.228 10:18:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:10.486 [2024-12-09 10:18:42.700894] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:10.486 [2024-12-09 10:18:42.700963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425282 ] 00:08:10.486 [2024-12-09 10:18:42.796146] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:10.486 [2024-12-09 10:18:42.796173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.486 [2024-12-09 10:18:42.907411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.418 10:18:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.418 10:18:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:11.418 10:18:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2425270 00:08:11.418 10:18:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2425270 00:08:11.418 10:18:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:11.982 lslocks: write error 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2425270 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2425270 ']' 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2425270 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2425270 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2425270' 00:08:11.982 killing process with pid 2425270 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2425270 00:08:11.982 10:18:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2425270 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2425282 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2425282 ']' 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2425282 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2425282 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2425282' 00:08:12.914 killing process with pid 2425282 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2425282 00:08:12.914 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2425282 00:08:13.172 00:08:13.172 real 0m3.328s 00:08:13.172 user 0m3.557s 00:08:13.172 sys 0m0.983s 00:08:13.172 10:18:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.172 10:18:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.172 ************************************ 00:08:13.172 END TEST non_locking_app_on_locked_coremask 00:08:13.172 ************************************ 00:08:13.172 10:18:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:13.172 10:18:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.172 10:18:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.172 10:18:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:13.172 ************************************ 00:08:13.172 START TEST locking_app_on_unlocked_coremask 00:08:13.172 ************************************ 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2425713 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2425713 /var/tmp/spdk.sock 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2425713 ']' 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.172 10:18:45 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.172 10:18:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.429 [2024-12-09 10:18:45.644988] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:13.429 [2024-12-09 10:18:45.645091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425713 ] 00:08:13.429 [2024-12-09 10:18:45.708365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:13.429 [2024-12-09 10:18:45.708402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.429 [2024-12-09 10:18:45.761995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2425718 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2425718 /var/tmp/spdk2.sock 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2425718 ']' 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:13.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.686 10:18:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:13.686 [2024-12-09 10:18:46.085952] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:08:13.686 [2024-12-09 10:18:46.086036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425718 ] 00:08:13.943 [2024-12-09 10:18:46.183798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.943 [2024-12-09 10:18:46.296216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.874 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:14.874 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:14.874 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2425718 00:08:14.874 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2425718 00:08:14.874 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:15.131 lslocks: write error 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2425713 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2425713 ']' 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2425713 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2425713 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2425713' 00:08:15.131 killing process with pid 2425713 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2425713 00:08:15.131 10:18:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2425713 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2425718 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2425718 ']' 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2425718 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2425718 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2425718' 00:08:16.066 killing process with pid 2425718 00:08:16.066 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2425718 00:08:16.066 10:18:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2425718 00:08:16.632 00:08:16.632 real 0m3.297s 00:08:16.632 user 0m3.525s 00:08:16.632 sys 0m1.013s 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 ************************************ 00:08:16.632 END TEST locking_app_on_unlocked_coremask 00:08:16.632 ************************************ 00:08:16.632 10:18:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:16.632 10:18:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:16.632 10:18:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.632 10:18:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 ************************************ 00:08:16.632 START TEST locking_app_on_locked_coremask 00:08:16.632 ************************************ 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2426149 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2426149 /var/tmp/spdk.sock 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2426149 ']' 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.632 10:18:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:16.632 [2024-12-09 10:18:48.994177] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:16.632 [2024-12-09 10:18:48.994284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426149 ] 00:08:16.632 [2024-12-09 10:18:49.062206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.890 [2024-12-09 10:18:49.122414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2426158 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2426158 /var/tmp/spdk2.sock 
00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2426158 /var/tmp/spdk2.sock 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2426158 /var/tmp/spdk2.sock 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2426158 ']' 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:17.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.149 10:18:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:17.149 [2024-12-09 10:18:49.449196] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:17.149 [2024-12-09 10:18:49.449272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426158 ] 00:08:17.149 [2024-12-09 10:18:49.546508] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2426149 has claimed it. 00:08:17.149 [2024-12-09 10:18:49.546565] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:17.715 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2426158) - No such process 00:08:17.715 ERROR: process (pid: 2426158) is no longer running 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:17.715 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2426149 00:08:17.972 10:18:50 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2426149 00:08:17.972 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:18.230 lslocks: write error 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2426149 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2426149 ']' 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2426149 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426149 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426149' 00:08:18.230 killing process with pid 2426149 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2426149 00:08:18.230 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2426149 00:08:18.795 00:08:18.795 real 0m2.028s 00:08:18.795 user 0m2.253s 00:08:18.796 sys 0m0.611s 00:08:18.796 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.796 10:18:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.796 ************************************ 00:08:18.796 END TEST locking_app_on_locked_coremask 00:08:18.796 ************************************ 00:08:18.796 10:18:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:18.796 10:18:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.796 10:18:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.796 10:18:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:18.796 ************************************ 00:08:18.796 START TEST locking_overlapped_coremask 00:08:18.796 ************************************ 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2426325 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2426325 /var/tmp/spdk.sock 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2426325 ']' 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.796 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:18.796 [2024-12-09 10:18:51.075042] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:18.796 [2024-12-09 10:18:51.075129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426325 ] 00:08:18.796 [2024-12-09 10:18:51.148657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:18.796 [2024-12-09 10:18:51.207321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.796 [2024-12-09 10:18:51.207385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.796 [2024-12-09 10:18:51.207388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2426456 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2426456 /var/tmp/spdk2.sock 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2426456 /var/tmp/spdk2.sock 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2426456 /var/tmp/spdk2.sock 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2426456 ']' 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:19.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.054 10:18:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:19.311 [2024-12-09 10:18:51.540608] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:08:19.311 [2024-12-09 10:18:51.540707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426456 ] 00:08:19.311 [2024-12-09 10:18:51.644103] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2426325 has claimed it. 00:08:19.311 [2024-12-09 10:18:51.644194] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:19.876 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2426456) - No such process 00:08:19.876 ERROR: process (pid: 2426456) is no longer running 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2426325 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2426325 ']' 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2426325 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426325 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426325' 00:08:19.876 killing process with pid 2426325 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2426325 00:08:19.876 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2426325 00:08:20.442 00:08:20.442 real 0m1.736s 00:08:20.442 user 0m4.751s 00:08:20.442 sys 0m0.470s 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 
************************************ 00:08:20.442 END TEST locking_overlapped_coremask 00:08:20.442 ************************************ 00:08:20.442 10:18:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:20.442 10:18:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:20.442 10:18:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.442 10:18:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 ************************************ 00:08:20.442 START TEST locking_overlapped_coremask_via_rpc 00:08:20.442 ************************************ 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2426620 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2426620 /var/tmp/spdk.sock 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2426620 ']' 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:20.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.442 10:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 [2024-12-09 10:18:52.860308] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:20.442 [2024-12-09 10:18:52.860413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426620 ] 00:08:20.700 [2024-12-09 10:18:52.924051] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:20.700 [2024-12-09 10:18:52.924091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:20.700 [2024-12-09 10:18:52.979256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.700 [2024-12-09 10:18:52.979285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.700 [2024-12-09 10:18:52.979289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2426629 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 2426629 /var/tmp/spdk2.sock 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2426629 ']' 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:20.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.957 10:18:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:20.957 [2024-12-09 10:18:53.310954] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:20.957 [2024-12-09 10:18:53.311053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2426629 ] 00:08:21.214 [2024-12-09 10:18:53.420730] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:21.214 [2024-12-09 10:18:53.420768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:21.214 [2024-12-09 10:18:53.541719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:21.214 [2024-12-09 10:18:53.545209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:21.214 [2024-12-09 10:18:53.545212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.146 10:18:54 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.146 [2024-12-09 10:18:54.304256] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2426620 has claimed it. 00:08:22.146 request: 00:08:22.146 { 00:08:22.146 "method": "framework_enable_cpumask_locks", 00:08:22.146 "req_id": 1 00:08:22.146 } 00:08:22.146 Got JSON-RPC error response 00:08:22.146 response: 00:08:22.146 { 00:08:22.146 "code": -32603, 00:08:22.146 "message": "Failed to claim CPU core: 2" 00:08:22.146 } 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2426620 /var/tmp/spdk.sock 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2426620 ']' 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.146 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2426629 /var/tmp/spdk2.sock 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2426629 ']' 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:22.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.403 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:22.660 00:08:22.660 real 0m2.071s 00:08:22.660 user 0m1.145s 00:08:22.660 sys 0m0.183s 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.660 10:18:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:22.660 ************************************ 00:08:22.660 END TEST locking_overlapped_coremask_via_rpc 00:08:22.660 ************************************ 00:08:22.660 10:18:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:22.660 10:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2426620 ]] 00:08:22.660 10:18:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2426620 00:08:22.660 10:18:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2426620 ']' 00:08:22.660 10:18:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2426620 00:08:22.660 10:18:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:22.660 10:18:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:22.660 10:18:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426620 00:08:22.660 10:18:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:22.661 10:18:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:22.661 10:18:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2426620' 00:08:22.661 killing process with pid 2426620 00:08:22.661 10:18:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2426620 00:08:22.661 10:18:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2426620 00:08:23.224 10:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2426629 ]] 00:08:23.224 10:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2426629 00:08:23.224 10:18:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2426629 ']' 00:08:23.224 10:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2426629 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2426629 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2426629' 00:08:23.225 killing process with pid 2426629 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2426629 00:08:23.225 10:18:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2426629 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2426620 ]] 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2426620 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2426620 ']' 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2426620 00:08:23.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2426620) - No such process 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2426620 is not found' 00:08:23.790 Process with pid 2426620 is not found 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2426629 ]] 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2426629 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2426629 ']' 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2426629 00:08:23.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2426629) - No such process 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2426629 is not found' 00:08:23.790 Process with pid 2426629 is not found 00:08:23.790 10:18:55 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:23.790 00:08:23.790 real 0m16.410s 00:08:23.790 user 0m29.676s 00:08:23.790 sys 0m5.236s 00:08:23.790 10:18:55 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.790 
10:18:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:23.790 ************************************ 00:08:23.790 END TEST cpu_locks 00:08:23.790 ************************************ 00:08:23.790 00:08:23.790 real 0m41.246s 00:08:23.790 user 1m20.386s 00:08:23.790 sys 0m9.322s 00:08:23.790 10:18:55 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:23.790 10:18:55 event -- common/autotest_common.sh@10 -- # set +x 00:08:23.790 ************************************ 00:08:23.790 END TEST event 00:08:23.790 ************************************ 00:08:23.790 10:18:55 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:23.790 10:18:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:23.790 10:18:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.790 10:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:23.790 ************************************ 00:08:23.790 START TEST thread 00:08:23.790 ************************************ 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:23.791 * Looking for test storage... 
00:08:23.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:23.791 10:18:56 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:23.791 10:18:56 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:23.791 10:18:56 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:23.791 10:18:56 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:23.791 10:18:56 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:23.791 10:18:56 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:23.791 10:18:56 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:23.791 10:18:56 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:23.791 10:18:56 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:23.791 10:18:56 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:23.791 10:18:56 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:23.791 10:18:56 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:23.791 10:18:56 thread -- scripts/common.sh@345 -- # : 1 00:08:23.791 10:18:56 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:23.791 10:18:56 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:23.791 10:18:56 thread -- scripts/common.sh@365 -- # decimal 1 00:08:23.791 10:18:56 thread -- scripts/common.sh@353 -- # local d=1 00:08:23.791 10:18:56 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:23.791 10:18:56 thread -- scripts/common.sh@355 -- # echo 1 00:08:23.791 10:18:56 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:23.791 10:18:56 thread -- scripts/common.sh@366 -- # decimal 2 00:08:23.791 10:18:56 thread -- scripts/common.sh@353 -- # local d=2 00:08:23.791 10:18:56 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:23.791 10:18:56 thread -- scripts/common.sh@355 -- # echo 2 00:08:23.791 10:18:56 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:23.791 10:18:56 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:23.791 10:18:56 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:23.791 10:18:56 thread -- scripts/common.sh@368 -- # return 0 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:23.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.791 --rc genhtml_branch_coverage=1 00:08:23.791 --rc genhtml_function_coverage=1 00:08:23.791 --rc genhtml_legend=1 00:08:23.791 --rc geninfo_all_blocks=1 00:08:23.791 --rc geninfo_unexecuted_blocks=1 00:08:23.791 00:08:23.791 ' 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:23.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.791 --rc genhtml_branch_coverage=1 00:08:23.791 --rc genhtml_function_coverage=1 00:08:23.791 --rc genhtml_legend=1 00:08:23.791 --rc geninfo_all_blocks=1 00:08:23.791 --rc geninfo_unexecuted_blocks=1 00:08:23.791 00:08:23.791 ' 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:23.791 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.791 --rc genhtml_branch_coverage=1 00:08:23.791 --rc genhtml_function_coverage=1 00:08:23.791 --rc genhtml_legend=1 00:08:23.791 --rc geninfo_all_blocks=1 00:08:23.791 --rc geninfo_unexecuted_blocks=1 00:08:23.791 00:08:23.791 ' 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:23.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:23.791 --rc genhtml_branch_coverage=1 00:08:23.791 --rc genhtml_function_coverage=1 00:08:23.791 --rc genhtml_legend=1 00:08:23.791 --rc geninfo_all_blocks=1 00:08:23.791 --rc geninfo_unexecuted_blocks=1 00:08:23.791 00:08:23.791 ' 00:08:23.791 10:18:56 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:23.791 10:18:56 thread -- common/autotest_common.sh@10 -- # set +x 00:08:23.791 ************************************ 00:08:23.791 START TEST thread_poller_perf 00:08:23.791 ************************************ 00:08:23.791 10:18:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:23.791 [2024-12-09 10:18:56.203333] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:08:23.791 [2024-12-09 10:18:56.203400] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427127 ] 00:08:24.049 [2024-12-09 10:18:56.270540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.049 [2024-12-09 10:18:56.330042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.049 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:25.421 [2024-12-09T09:18:57.862Z] ====================================== 00:08:25.421 [2024-12-09T09:18:57.862Z] busy:2706653427 (cyc) 00:08:25.421 [2024-12-09T09:18:57.862Z] total_run_count: 365000 00:08:25.421 [2024-12-09T09:18:57.862Z] tsc_hz: 2700000000 (cyc) 00:08:25.421 [2024-12-09T09:18:57.862Z] ====================================== 00:08:25.421 [2024-12-09T09:18:57.862Z] poller_cost: 7415 (cyc), 2746 (nsec) 00:08:25.421 00:08:25.421 real 0m1.247s 00:08:25.421 user 0m1.176s 00:08:25.421 sys 0m0.066s 00:08:25.421 10:18:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.421 10:18:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:25.421 ************************************ 00:08:25.421 END TEST thread_poller_perf 00:08:25.421 ************************************ 00:08:25.421 10:18:57 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:25.421 10:18:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:25.421 10:18:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.421 10:18:57 thread -- common/autotest_common.sh@10 -- # set +x 00:08:25.421 ************************************ 00:08:25.421 START TEST thread_poller_perf 00:08:25.421 
************************************ 00:08:25.421 10:18:57 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:25.421 [2024-12-09 10:18:57.500809] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:25.421 [2024-12-09 10:18:57.500878] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427285 ] 00:08:25.421 [2024-12-09 10:18:57.567208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.421 [2024-12-09 10:18:57.620642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.421 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:26.354 [2024-12-09T09:18:58.795Z] ====================================== 00:08:26.354 [2024-12-09T09:18:58.795Z] busy:2702181261 (cyc) 00:08:26.354 [2024-12-09T09:18:58.795Z] total_run_count: 4454000 00:08:26.354 [2024-12-09T09:18:58.795Z] tsc_hz: 2700000000 (cyc) 00:08:26.354 [2024-12-09T09:18:58.795Z] ====================================== 00:08:26.354 [2024-12-09T09:18:58.795Z] poller_cost: 606 (cyc), 224 (nsec) 00:08:26.354 00:08:26.354 real 0m1.233s 00:08:26.354 user 0m1.165s 00:08:26.354 sys 0m0.064s 00:08:26.354 10:18:58 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.354 10:18:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:26.354 ************************************ 00:08:26.354 END TEST thread_poller_perf 00:08:26.354 ************************************ 00:08:26.354 10:18:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:26.354 00:08:26.354 real 0m2.730s 00:08:26.354 user 0m2.477s 00:08:26.354 sys 0m0.258s 00:08:26.354 10:18:58 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.354 10:18:58 thread -- common/autotest_common.sh@10 -- # set +x 00:08:26.354 ************************************ 00:08:26.354 END TEST thread 00:08:26.354 ************************************ 00:08:26.354 10:18:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:26.354 10:18:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:26.354 10:18:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:26.354 10:18:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.354 10:18:58 -- common/autotest_common.sh@10 -- # set +x 00:08:26.354 ************************************ 00:08:26.354 START TEST app_cmdline 00:08:26.354 ************************************ 00:08:26.354 10:18:58 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:08:26.612 * Looking for test storage... 00:08:26.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:26.612 10:18:58 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:26.612 10:18:58 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:08:26.612 10:18:58 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:26.612 10:18:58 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.612 10:18:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:26.612 10:18:58 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:26.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.613 --rc genhtml_branch_coverage=1 
00:08:26.613 --rc genhtml_function_coverage=1 00:08:26.613 --rc genhtml_legend=1 00:08:26.613 --rc geninfo_all_blocks=1 00:08:26.613 --rc geninfo_unexecuted_blocks=1 00:08:26.613 00:08:26.613 ' 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:26.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.613 --rc genhtml_branch_coverage=1 00:08:26.613 --rc genhtml_function_coverage=1 00:08:26.613 --rc genhtml_legend=1 00:08:26.613 --rc geninfo_all_blocks=1 00:08:26.613 --rc geninfo_unexecuted_blocks=1 00:08:26.613 00:08:26.613 ' 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:26.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.613 --rc genhtml_branch_coverage=1 00:08:26.613 --rc genhtml_function_coverage=1 00:08:26.613 --rc genhtml_legend=1 00:08:26.613 --rc geninfo_all_blocks=1 00:08:26.613 --rc geninfo_unexecuted_blocks=1 00:08:26.613 00:08:26.613 ' 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:26.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.613 --rc genhtml_branch_coverage=1 00:08:26.613 --rc genhtml_function_coverage=1 00:08:26.613 --rc genhtml_legend=1 00:08:26.613 --rc geninfo_all_blocks=1 00:08:26.613 --rc geninfo_unexecuted_blocks=1 00:08:26.613 00:08:26.613 ' 00:08:26.613 10:18:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:26.613 10:18:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2427609 00:08:26.613 10:18:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:26.613 10:18:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2427609 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2427609 ']' 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.613 10:18:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:26.613 [2024-12-09 10:18:58.991824] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:26.613 [2024-12-09 10:18:58.991921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2427609 ] 00:08:26.871 [2024-12-09 10:18:59.057472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.871 [2024-12-09 10:18:59.114780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.129 10:18:59 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.129 10:18:59 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:27.129 10:18:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:27.416 { 00:08:27.416 "version": "SPDK v25.01-pre git sha1 6c714c5fe", 00:08:27.416 "fields": { 00:08:27.416 "major": 25, 00:08:27.416 "minor": 1, 00:08:27.416 "patch": 0, 00:08:27.416 "suffix": "-pre", 00:08:27.416 "commit": "6c714c5fe" 00:08:27.416 } 00:08:27.416 } 00:08:27.416 10:18:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:27.416 10:18:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:27.417 10:18:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:27.417 10:18:59 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:27.701 request: 00:08:27.701 { 00:08:27.701 "method": "env_dpdk_get_mem_stats", 00:08:27.701 "req_id": 1 00:08:27.701 } 00:08:27.701 Got JSON-RPC error response 00:08:27.701 response: 00:08:27.701 { 00:08:27.701 "code": -32601, 00:08:27.701 "message": "Method not found" 00:08:27.701 } 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:27.701 10:18:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2427609 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2427609 ']' 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2427609 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2427609 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2427609' 00:08:27.701 killing process with pid 2427609 00:08:27.701 
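The `env_dpdk_get_mem_stats` call above fails with JSON-RPC error `-32601` because the target was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any method outside that allowlist is rejected. A hypothetical Python sketch of that allowlist behavior (illustrative only, not SPDK's dispatcher; the `dispatch` helper and `ALLOWED` set are invented names):

```python
# Allowlist as passed via --rpcs-allowed in the log above.
ALLOWED = {"spdk_get_version", "rpc_get_methods"}

def dispatch(method, req_id=1):
    # Methods outside the allowlist get the JSON-RPC standard
    # "Method not found" error, code -32601, as seen in the log.
    if method not in ALLOWED:
        return {"id": req_id,
                "error": {"code": -32601, "message": "Method not found"}}
    # Allowed methods would be handled normally (stubbed here).
    return {"id": req_id, "result": {}}

print(dispatch("env_dpdk_get_mem_stats"))
# {'id': 1, 'error': {'code': -32601, 'message': 'Method not found'}}
```

Note this is distinct from the earlier `framework_enable_cpumask_locks` failure, which returned `-32603` (internal error: the core lock was already claimed by another process), not `-32601`.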
10:18:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 2427609 00:08:27.701 10:18:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 2427609 00:08:28.266 00:08:28.266 real 0m1.670s 00:08:28.266 user 0m2.020s 00:08:28.266 sys 0m0.515s 00:08:28.266 10:19:00 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.266 10:19:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 ************************************ 00:08:28.266 END TEST app_cmdline 00:08:28.266 ************************************ 00:08:28.266 10:19:00 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:28.266 10:19:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.266 10:19:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.266 10:19:00 -- common/autotest_common.sh@10 -- # set +x 00:08:28.266 ************************************ 00:08:28.266 START TEST version 00:08:28.266 ************************************ 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:28.266 * Looking for test storage... 
00:08:28.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.266 10:19:00 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.266 10:19:00 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.266 10:19:00 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.266 10:19:00 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.266 10:19:00 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.266 10:19:00 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.266 10:19:00 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.266 10:19:00 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.266 10:19:00 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.266 10:19:00 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.266 10:19:00 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.266 10:19:00 version -- scripts/common.sh@344 -- # case "$op" in 00:08:28.266 10:19:00 version -- scripts/common.sh@345 -- # : 1 00:08:28.266 10:19:00 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.266 10:19:00 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.266 10:19:00 version -- scripts/common.sh@365 -- # decimal 1 00:08:28.266 10:19:00 version -- scripts/common.sh@353 -- # local d=1 00:08:28.266 10:19:00 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.266 10:19:00 version -- scripts/common.sh@355 -- # echo 1 00:08:28.266 10:19:00 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.266 10:19:00 version -- scripts/common.sh@366 -- # decimal 2 00:08:28.266 10:19:00 version -- scripts/common.sh@353 -- # local d=2 00:08:28.266 10:19:00 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.266 10:19:00 version -- scripts/common.sh@355 -- # echo 2 00:08:28.266 10:19:00 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.266 10:19:00 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.266 10:19:00 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.266 10:19:00 version -- scripts/common.sh@368 -- # return 0 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.266 --rc genhtml_branch_coverage=1 00:08:28.266 --rc genhtml_function_coverage=1 00:08:28.266 --rc genhtml_legend=1 00:08:28.266 --rc geninfo_all_blocks=1 00:08:28.266 --rc geninfo_unexecuted_blocks=1 00:08:28.266 00:08:28.266 ' 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.266 --rc genhtml_branch_coverage=1 00:08:28.266 --rc genhtml_function_coverage=1 00:08:28.266 --rc genhtml_legend=1 00:08:28.266 --rc geninfo_all_blocks=1 00:08:28.266 --rc geninfo_unexecuted_blocks=1 00:08:28.266 00:08:28.266 ' 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.266 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.266 --rc genhtml_branch_coverage=1 00:08:28.266 --rc genhtml_function_coverage=1 00:08:28.266 --rc genhtml_legend=1 00:08:28.266 --rc geninfo_all_blocks=1 00:08:28.266 --rc geninfo_unexecuted_blocks=1 00:08:28.266 00:08:28.266 ' 00:08:28.266 10:19:00 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.266 --rc genhtml_branch_coverage=1 00:08:28.266 --rc genhtml_function_coverage=1 00:08:28.266 --rc genhtml_legend=1 00:08:28.266 --rc geninfo_all_blocks=1 00:08:28.266 --rc geninfo_unexecuted_blocks=1 00:08:28.266 00:08:28.266 ' 00:08:28.266 10:19:00 version -- app/version.sh@17 -- # get_header_version major 00:08:28.266 10:19:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # cut -f2 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.266 10:19:00 version -- app/version.sh@17 -- # major=25 00:08:28.266 10:19:00 version -- app/version.sh@18 -- # get_header_version minor 00:08:28.266 10:19:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # cut -f2 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.266 10:19:00 version -- app/version.sh@18 -- # minor=1 00:08:28.266 10:19:00 version -- app/version.sh@19 -- # get_header_version patch 00:08:28.266 10:19:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # cut -f2 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.266 
10:19:00 version -- app/version.sh@19 -- # patch=0 00:08:28.266 10:19:00 version -- app/version.sh@20 -- # get_header_version suffix 00:08:28.266 10:19:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # cut -f2 00:08:28.266 10:19:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:28.266 10:19:00 version -- app/version.sh@20 -- # suffix=-pre 00:08:28.266 10:19:00 version -- app/version.sh@22 -- # version=25.1 00:08:28.266 10:19:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:28.266 10:19:00 version -- app/version.sh@28 -- # version=25.1rc0 00:08:28.266 10:19:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:28.266 10:19:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:28.524 10:19:00 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:28.524 10:19:00 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:28.524 00:08:28.524 real 0m0.203s 00:08:28.524 user 0m0.128s 00:08:28.524 sys 0m0.101s 00:08:28.524 10:19:00 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.524 10:19:00 version -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 ************************************ 00:08:28.524 END TEST version 00:08:28.524 ************************************ 00:08:28.524 10:19:00 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:28.524 10:19:00 -- spdk/autotest.sh@194 -- # uname -s 00:08:28.524 10:19:00 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:08:28.524 10:19:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:28.524 10:19:00 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:28.524 10:19:00 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:08:28.524 10:19:00 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:28.524 10:19:00 -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 10:19:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:08:28.524 10:19:00 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:08:28.524 10:19:00 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.524 10:19:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.524 10:19:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.524 10:19:00 -- common/autotest_common.sh@10 -- # set +x 00:08:28.524 ************************************ 00:08:28.524 START TEST nvmf_tcp 00:08:28.524 ************************************ 00:08:28.524 10:19:00 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:28.524 * Looking for test storage... 
00:08:28.524 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:28.524 10:19:00 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.524 10:19:00 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.524 10:19:00 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.525 10:19:00 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.525 --rc genhtml_branch_coverage=1 00:08:28.525 --rc genhtml_function_coverage=1 00:08:28.525 --rc genhtml_legend=1 00:08:28.525 --rc geninfo_all_blocks=1 00:08:28.525 --rc geninfo_unexecuted_blocks=1 00:08:28.525 00:08:28.525 ' 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.525 --rc genhtml_branch_coverage=1 00:08:28.525 --rc genhtml_function_coverage=1 00:08:28.525 --rc genhtml_legend=1 00:08:28.525 --rc geninfo_all_blocks=1 00:08:28.525 --rc geninfo_unexecuted_blocks=1 00:08:28.525 00:08:28.525 ' 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:08:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.525 --rc genhtml_branch_coverage=1 00:08:28.525 --rc genhtml_function_coverage=1 00:08:28.525 --rc genhtml_legend=1 00:08:28.525 --rc geninfo_all_blocks=1 00:08:28.525 --rc geninfo_unexecuted_blocks=1 00:08:28.525 00:08:28.525 ' 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.525 --rc genhtml_branch_coverage=1 00:08:28.525 --rc genhtml_function_coverage=1 00:08:28.525 --rc genhtml_legend=1 00:08:28.525 --rc geninfo_all_blocks=1 00:08:28.525 --rc geninfo_unexecuted_blocks=1 00:08:28.525 00:08:28.525 ' 00:08:28.525 10:19:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:28.525 10:19:00 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:28.525 10:19:00 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.525 10:19:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.525 ************************************ 00:08:28.525 START TEST nvmf_target_core 00:08:28.525 ************************************ 00:08:28.525 10:19:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:28.783 * Looking for test storage... 
00:08:28.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.783 --rc genhtml_branch_coverage=1 00:08:28.783 --rc genhtml_function_coverage=1 00:08:28.783 --rc genhtml_legend=1 00:08:28.783 --rc geninfo_all_blocks=1 00:08:28.783 --rc geninfo_unexecuted_blocks=1 00:08:28.783 00:08:28.783 ' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.783 --rc genhtml_branch_coverage=1 
00:08:28.783 --rc genhtml_function_coverage=1 00:08:28.783 --rc genhtml_legend=1 00:08:28.783 --rc geninfo_all_blocks=1 00:08:28.783 --rc geninfo_unexecuted_blocks=1 00:08:28.783 00:08:28.783 ' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.783 --rc genhtml_branch_coverage=1 00:08:28.783 --rc genhtml_function_coverage=1 00:08:28.783 --rc genhtml_legend=1 00:08:28.783 --rc geninfo_all_blocks=1 00:08:28.783 --rc geninfo_unexecuted_blocks=1 00:08:28.783 00:08:28.783 ' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.783 --rc genhtml_branch_coverage=1 00:08:28.783 --rc genhtml_function_coverage=1 00:08:28.783 --rc genhtml_legend=1 00:08:28.783 --rc geninfo_all_blocks=1 00:08:28.783 --rc geninfo_unexecuted_blocks=1 00:08:28.783 00:08:28.783 ' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.783 10:19:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.784 ************************************ 00:08:28.784 START TEST nvmf_abort 00:08:28.784 ************************************ 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:08:28.784 * Looking for test storage... 
00:08:28.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.784 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.042 
10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:29.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.042 --rc genhtml_branch_coverage=1 00:08:29.042 --rc genhtml_function_coverage=1 00:08:29.042 --rc genhtml_legend=1 00:08:29.042 --rc geninfo_all_blocks=1 00:08:29.042 --rc 
geninfo_unexecuted_blocks=1 00:08:29.042 00:08:29.042 ' 00:08:29.042 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:29.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.042 --rc genhtml_branch_coverage=1 00:08:29.042 --rc genhtml_function_coverage=1 00:08:29.043 --rc genhtml_legend=1 00:08:29.043 --rc geninfo_all_blocks=1 00:08:29.043 --rc geninfo_unexecuted_blocks=1 00:08:29.043 00:08:29.043 ' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:29.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.043 --rc genhtml_branch_coverage=1 00:08:29.043 --rc genhtml_function_coverage=1 00:08:29.043 --rc genhtml_legend=1 00:08:29.043 --rc geninfo_all_blocks=1 00:08:29.043 --rc geninfo_unexecuted_blocks=1 00:08:29.043 00:08:29.043 ' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:29.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.043 --rc genhtml_branch_coverage=1 00:08:29.043 --rc genhtml_function_coverage=1 00:08:29.043 --rc genhtml_legend=1 00:08:29.043 --rc geninfo_all_blocks=1 00:08:29.043 --rc geninfo_unexecuted_blocks=1 00:08:29.043 00:08:29.043 ' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.043 10:19:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:29.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:08:29.043 10:19:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:31.575 10:19:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:31.575 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:31.575 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:31.575 10:19:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:31.575 Found net devices under 0000:09:00.0: cvl_0_0 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.575 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:08:31.576 Found net devices under 0000:09:00.1: cvl_0_1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:31.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:31.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:08:31.576 00:08:31.576 --- 10.0.0.2 ping statistics --- 00:08:31.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.576 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:31.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:31.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:08:31.576 00:08:31.576 --- 10.0.0.1 ping statistics --- 00:08:31.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:31.576 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2429703 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2429703 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2429703 ']' 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.576 [2024-12-09 10:19:03.722185] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:08:31.576 [2024-12-09 10:19:03.722272] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.576 [2024-12-09 10:19:03.792384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:31.576 [2024-12-09 10:19:03.849651] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.576 [2024-12-09 10:19:03.849710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.576 [2024-12-09 10:19:03.849737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.576 [2024-12-09 10:19:03.849748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.576 [2024-12-09 10:19:03.849757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:31.576 [2024-12-09 10:19:03.851250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.576 [2024-12-09 10:19:03.851312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.576 [2024-12-09 10:19:03.851316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.576 10:19:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.576 [2024-12-09 10:19:03.998198] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.576 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.576 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:08:31.576 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.576 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.834 Malloc0 00:08:31.835 10:19:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.835 Delay0 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.835 [2024-12-09 10:19:04.069764] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.835 10:19:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:08:31.835 [2024-12-09 10:19:04.226261] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:34.366 Initializing NVMe Controllers 00:08:34.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:34.366 controller IO queue size 128 less than required 00:08:34.366 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:08:34.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:08:34.366 Initialization complete. Launching workers. 
00:08:34.366 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28423 00:08:34.366 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28488, failed to submit 62 00:08:34.366 success 28427, unsuccessful 61, failed 0 00:08:34.366 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:34.367 rmmod nvme_tcp 00:08:34.367 rmmod nvme_fabrics 00:08:34.367 rmmod nvme_keyring 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:08:34.367 10:19:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2429703 ']' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2429703 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2429703 ']' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2429703 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429703 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429703' 00:08:34.367 killing process with pid 2429703 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2429703 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2429703 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:34.367 10:19:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:36.912 00:08:36.912 real 0m7.671s 00:08:36.912 user 0m11.135s 00:08:36.912 sys 0m2.698s 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:08:36.912 ************************************ 00:08:36.912 END TEST nvmf_abort 00:08:36.912 ************************************ 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:36.912 ************************************ 00:08:36.912 START TEST nvmf_ns_hotplug_stress 00:08:36.912 ************************************ 00:08:36.912 10:19:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:08:36.912 * Looking for test storage... 00:08:36.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:08:36.912 10:19:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.912 
10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.912 10:19:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:36.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.912 --rc genhtml_branch_coverage=1 00:08:36.912 --rc genhtml_function_coverage=1 00:08:36.912 --rc genhtml_legend=1 00:08:36.912 --rc geninfo_all_blocks=1 00:08:36.912 --rc geninfo_unexecuted_blocks=1 00:08:36.912 00:08:36.912 ' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:36.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.912 --rc genhtml_branch_coverage=1 00:08:36.912 --rc genhtml_function_coverage=1 00:08:36.912 --rc genhtml_legend=1 00:08:36.912 --rc geninfo_all_blocks=1 00:08:36.912 --rc geninfo_unexecuted_blocks=1 00:08:36.912 00:08:36.912 ' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:36.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.912 --rc genhtml_branch_coverage=1 00:08:36.912 --rc genhtml_function_coverage=1 00:08:36.912 --rc genhtml_legend=1 00:08:36.912 --rc geninfo_all_blocks=1 00:08:36.912 --rc geninfo_unexecuted_blocks=1 00:08:36.912 00:08:36.912 ' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:36.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.912 --rc genhtml_branch_coverage=1 00:08:36.912 --rc genhtml_function_coverage=1 00:08:36.912 --rc genhtml_legend=1 00:08:36.912 --rc geninfo_all_blocks=1 00:08:36.912 --rc geninfo_unexecuted_blocks=1 00:08:36.912 
00:08:36.912 ' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:36.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:08:36.912 10:19:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:38.851 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:38.851 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:38.851 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:38.851 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:38.851 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:38.852 Found net devices under 0000:09:00.0: cvl_0_0 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:38.852 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:38.852 Found net devices under 0000:09:00.1: cvl_0_1 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:38.852 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.110 10:19:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:39.110 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.110 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:08:39.110 00:08:39.110 --- 10.0.0.2 ping statistics --- 00:08:39.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.110 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.110 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:39.110 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:39.110 00:08:39.110 --- 10.0.0.1 ping statistics --- 00:08:39.110 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.110 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2432055 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2432055 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2432055 ']' 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:39.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:39.110 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.110 [2024-12-09 10:19:11.406407] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:08:39.110 [2024-12-09 10:19:11.406500] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.110 [2024-12-09 10:19:11.484422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:39.110 [2024-12-09 10:19:11.544332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:39.110 [2024-12-09 10:19:11.544389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:39.110 [2024-12-09 10:19:11.544419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:39.110 [2024-12-09 10:19:11.544430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:39.110 [2024-12-09 10:19:11.544440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
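For readers following the trace, the nvmfappstart sequence above (nvmf/common.sh@507-510) amounts to launching nvmf_tgt inside the test network namespace and polling until its RPC socket answers. A minimal sketch under stated assumptions — the paths, netns name, core mask, and max_retries=100 are taken from this run's log, but this is a paraphrase, not the nvmf/common.sh helper verbatim:

```shell
# Sketch of the traced nvmfappstart/waitforlisten sequence (assumed layout;
# not the common.sh helper verbatim). SPDK points at the checkout this run used.
SPDK=${SPDK:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

start_nvmf_target() {
    # Launch nvmf_tgt in the test namespace: instance 0, tracepoint group
    # mask 0xFFFF, reactors on cores 1-3 (-m 0xE), as in the trace above.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # waitforlisten: poll the UNIX-domain RPC socket up to 100 times
    # (max_retries=100 in the trace), bailing out if the target dies first.
    local i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || return 1   # target exited during startup
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods \
            >/dev/null 2>&1 && return 0              # socket is up; done waiting
        sleep 0.5
    done
    return 1
}
```

The real helper also manages xtrace state and a configurable rpc_addr; only the launch-and-poll loop visible in the trace is mirrored here.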
00:08:39.110 [2024-12-09 10:19:11.546074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:39.110 [2024-12-09 10:19:11.546152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:39.110 [2024-12-09 10:19:11.546171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:39.368 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:39.626 [2024-12-09 10:19:11.937152] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:39.626 10:19:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:39.885 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.143 [2024-12-09 10:19:12.475943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.143 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:40.400 10:19:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:40.658 Malloc0 00:08:40.658 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:40.916 Delay0 00:08:40.916 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:41.174 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:41.432 NULL1 00:08:41.690 10:19:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:41.948 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2432361 00:08:41.948 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:41.948 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:41.948 10:19:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.881 Read completed with error (sct=0, sc=11) 00:08:43.138 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:43.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.138 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:43.395 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:43.395 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:43.653 true 00:08:43.653 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:43.653 10:19:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:44.217 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:44.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:44.732 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:44.732 10:19:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:44.732 true 00:08:44.989 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:44.989 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:45.246 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:45.503 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:45.503 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:45.760 true 00:08:45.760 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:45.760 10:19:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:46.018 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:46.276 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:46.276 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:46.533 true 00:08:46.533 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:46.533 10:19:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:47.465 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:47.724 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:47.724 10:19:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:47.981 true 00:08:47.981 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:47.981 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:48.238 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.496 
10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:48.496 10:19:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:48.754 true 00:08:48.754 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:48.754 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:49.011 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:49.269 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:49.269 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:49.526 true 00:08:49.526 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:49.526 10:19:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:50.466 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:50.466 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:50.724 10:19:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:50.724 10:19:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:50.993 true 00:08:50.993 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:50.993 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:51.251 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.509 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:51.509 10:19:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:51.765 true 00:08:51.765 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:51.765 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:52.697 10:19:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:52.697 10:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:52.697 10:19:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:52.953 true 00:08:52.953 10:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:52.953 10:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.210 10:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.466 10:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:53.466 10:19:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:53.723 true 00:08:53.723 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:53.723 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:53.980 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:54.237 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:54.237 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:54.505 true 00:08:54.761 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:54.761 10:19:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:55.693 10:19:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:55.951 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:55.951 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:56.209 true 00:08:56.209 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:56.209 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:56.467 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.725 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:56.725 10:19:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:56.983 true 00:08:56.983 10:19:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:56.983 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:57.240 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:57.499 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:57.499 10:19:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:57.757 true 00:08:57.757 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:57.757 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:58.691 10:19:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.020 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:59.020 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:59.315 true 00:08:59.315 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:59.315 10:19:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:59.315 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:59.605 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:59.605 10:19:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:59.863 true 00:08:59.863 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:08:59.863 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:00.119 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.377 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:00.377 10:19:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:00.634 true 00:09:00.634 10:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:00.634 10:19:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:01.569 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.827 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:01.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:01.827 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:02.084 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:02.084 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:02.341 true 00:09:02.341 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:02.341 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.599 10:19:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:02.855 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:02.855 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:03.111 true 00:09:03.111 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2432361 00:09:03.111 10:19:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.042 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.042 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:04.042 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:04.042 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:09:04.299 true 00:09:04.299 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:04.299 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:04.555 10:19:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:04.811 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:04.811 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:05.069 true 00:09:05.069 10:19:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:05.069 10:19:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.001 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:06.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.259 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:06.259 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:06.259 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:06.517 true 00:09:06.517 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:06.517 10:19:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:06.775 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:07.032 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:07.032 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:07.291 true 00:09:07.291 10:19:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:07.291 10:19:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.225 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:08.225 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:08.482 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:09:08.482 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:08.739 true 00:09:08.739 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:08.739 10:19:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:08.996 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:09.254 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:09.254 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:09.511 true 00:09:09.511 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:09.511 10:19:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.442 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:10.442 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:10.442 10:19:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:10.699 true 00:09:10.699 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:10.699 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:10.955 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:11.212 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:11.212 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:11.468 true 00:09:11.468 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:11.468 10:19:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.031 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:12.031 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:12.031 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:12.031 Initializing NVMe Controllers 00:09:12.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:12.031 Controller IO queue size 128, less than required. 00:09:12.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:12.031 Controller IO queue size 128, less than required. 00:09:12.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:12.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:12.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:12.031 Initialization complete. Launching workers. 
00:09:12.031 ======================================================== 00:09:12.031 Latency(us) 00:09:12.031 Device Information : IOPS MiB/s Average min max 00:09:12.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 599.52 0.29 95164.61 3445.42 1012720.79 00:09:12.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9264.53 4.52 13815.68 2443.77 481987.82 00:09:12.031 ======================================================== 00:09:12.031 Total : 9864.05 4.82 18759.93 2443.77 1012720.79 00:09:12.031 00:09:12.288 true 00:09:12.288 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2432361 00:09:12.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2432361) - No such process 00:09:12.288 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2432361 00:09:12.288 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:12.545 10:19:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:13.110 null0 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.110 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:13.367 null1 00:09:13.367 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.367 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.367 10:19:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:13.624 null2 00:09:13.624 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.624 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.624 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:13.881 null3 00:09:13.881 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:13.881 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:13.881 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:14.138 null4 00:09:14.138 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.396 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.396 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:09:14.396 null5 00:09:14.655 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.655 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.655 10:19:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:09:14.912 null6 00:09:14.912 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:14.912 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:14.912 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:15.171 null7 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.171 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2436429 2436430 2436432 2436434 2436436 2436438 2436440 2436442 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.172 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:15.431 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:15.690 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:15.949 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.208 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:16.467 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:16.467 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:16.467 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:16.725 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:16.725 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:16.725 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:16.725 10:19:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:16.725 10:19:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:16.997 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.998 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.998 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:16.998 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:16.998 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:16.998 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.255 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:17.514 10:19:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:17.771 10:19:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:17.771 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.029 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.029 10:19:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.030 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.288 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:18.288 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:18.288 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:18.288 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:18.288 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:18.289 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:18.289 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:18.556 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:18.556 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.556 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.556 10:19:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:18.814 10:19:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:18.814 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.072 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 
10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.334 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:19.591 10:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:19.849 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.849 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.849 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:19.850 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.107 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.107 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.107 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.107 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.108 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.108 10:19:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:20.108 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.108 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.365 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.365 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.366 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:20.623 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.623 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.623 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:20.623 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:20.623 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:20.623 10:19:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:20.880 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:21.138 rmmod nvme_tcp 00:09:21.138 rmmod nvme_fabrics 00:09:21.138 rmmod nvme_keyring 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2432055 ']' 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2432055 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # 
'[' -z 2432055 ']' 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2432055 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432055 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432055' 00:09:21.138 killing process with pid 2432055 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2432055 00:09:21.138 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2432055 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # iptables-restore 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.396 10:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:23.940 00:09:23.940 real 0m46.954s 00:09:23.940 user 3m38.779s 00:09:23.940 sys 0m15.574s 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:23.940 ************************************ 00:09:23.940 END TEST nvmf_ns_hotplug_stress 00:09:23.940 ************************************ 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.940 ************************************ 00:09:23.940 START TEST nvmf_delete_subsystem 00:09:23.940 ************************************ 00:09:23.940 
10:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:09:23.940 * Looking for test storage... 00:09:23.940 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:23.940 10:19:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:23.940 10:19:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:23.940 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:23.941 10:19:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:23.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.941 --rc genhtml_branch_coverage=1 00:09:23.941 --rc genhtml_function_coverage=1 00:09:23.941 --rc genhtml_legend=1 00:09:23.941 --rc geninfo_all_blocks=1 00:09:23.941 --rc geninfo_unexecuted_blocks=1 00:09:23.941 00:09:23.941 ' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:23.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.941 --rc genhtml_branch_coverage=1 00:09:23.941 --rc genhtml_function_coverage=1 00:09:23.941 --rc genhtml_legend=1 00:09:23.941 --rc geninfo_all_blocks=1 00:09:23.941 --rc geninfo_unexecuted_blocks=1 00:09:23.941 00:09:23.941 ' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:23.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.941 --rc genhtml_branch_coverage=1 00:09:23.941 --rc genhtml_function_coverage=1 00:09:23.941 --rc genhtml_legend=1 00:09:23.941 --rc geninfo_all_blocks=1 00:09:23.941 --rc geninfo_unexecuted_blocks=1 00:09:23.941 00:09:23.941 ' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:23.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:23.941 --rc genhtml_branch_coverage=1 00:09:23.941 --rc genhtml_function_coverage=1 00:09:23.941 --rc genhtml_legend=1 00:09:23.941 --rc geninfo_all_blocks=1 00:09:23.941 --rc geninfo_unexecuted_blocks=1 00:09:23.941 00:09:23.941 ' 
00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.941 10:19:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:23.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:23.941 10:19:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.479 10:19:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:26.479 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:26.479 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:26.479 Found net devices under 0000:09:00.0: cvl_0_0 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:09:00.1: cvl_0_1' 00:09:26.479 Found net devices under 0000:09:00.1: cvl_0_1 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.479 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:26.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:09:26.480 00:09:26.480 --- 10.0.0.2 ping statistics --- 00:09:26.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.480 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:09:26.480 00:09:26.480 --- 10.0.0.1 ping statistics --- 00:09:26.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.480 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:09:26.480 10:19:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2439336 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2439336 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2439336 ']' 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 [2024-12-09 10:19:58.517551] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:09:26.480 [2024-12-09 10:19:58.517640] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.480 [2024-12-09 10:19:58.587326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:26.480 [2024-12-09 10:19:58.640093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.480 [2024-12-09 10:19:58.640161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.480 [2024-12-09 10:19:58.640190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.480 [2024-12-09 10:19:58.640201] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.480 [2024-12-09 10:19:58.640210] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:26.480 [2024-12-09 10:19:58.641695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.480 [2024-12-09 10:19:58.641701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 [2024-12-09 10:19:58.789764] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 [2024-12-09 10:19:58.805980] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 NULL1 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 Delay0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.480 10:19:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2439358 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:09:26.480 10:19:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:26.480 [2024-12-09 10:19:58.890874] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:29.004 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:29.004 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.004 10:20:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error 
(sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Read completed with error (sct=0, sc=8) 00:09:29.004 Write completed with error (sct=0, sc=8) 00:09:29.004 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 [2024-12-09 10:20:01.142590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffb70000c40 is same with the state(6) to be set 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 
00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read 
completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with 
error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 starting I/O failed: -6 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 [2024-12-09 10:20:01.143343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc682c0 is same with the state(6) to be set 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, 
sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Read 
completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.005 Read completed with error (sct=0, sc=8) 00:09:29.005 Write completed with error (sct=0, sc=8) 00:09:29.935 [2024-12-09 10:20:02.111063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc699b0 is same with the state(6) to be set 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with 
error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 [2024-12-09 10:20:02.141530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc68860 is same with the state(6) to be set 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 [2024-12-09 10:20:02.141730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc684a0 is same with the state(6) to be set 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error 
(sct=0, sc=8) 00:09:29.935 Write completed with error (sct=0, sc=8) 00:09:29.935 Read completed with error (sct=0, sc=8) 00:09:29.935 [2024-12-09 10:20:02.145427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffb7000d020 is same with the state(6) to be set 00:09:29.936 [2024-12-09 10:20:02.146181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffb7000d680 is same with the state(6) to be set 00:09:29.936 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.936 Initializing NVMe Controllers 00:09:29.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:29.936 Controller IO queue size 128, less than required. 00:09:29.936 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:29.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:29.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:29.936 Initialization complete. Launching workers. 
00:09:29.936 ======================================================== 00:09:29.936 Latency(us) 00:09:29.936 Device Information : IOPS MiB/s Average min max 00:09:29.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.19 0.08 922297.31 408.62 2000615.94 00:09:29.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.66 0.08 892357.48 658.62 1011475.62 00:09:29.936 ======================================================== 00:09:29.936 Total : 337.85 0.16 907173.52 408.62 2000615.94 00:09:29.936 00:09:29.936 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:09:29.936 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2439358 00:09:29.936 [2024-12-09 10:20:02.146681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc699b0 (9): Bad file descriptor 00:09:29.936 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:09:29.936 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2439358 00:09:30.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2439358) - No such process 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2439358 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2439358 00:09:30.538 10:20:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2439358 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:30.538 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.539 10:20:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.539 [2024-12-09 10:20:02.663644] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2439775 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:30.539 10:20:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:30.539 [2024-12-09 10:20:02.733754] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:09:30.825 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:30.825 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:30.825 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.391 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.391 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:31.391 10:20:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:31.956 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:31.956 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:31.956 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.523 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.523 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:32.523 10:20:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:32.785 10:20:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:32.785 10:20:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:32.785 10:20:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.356 10:20:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.356 10:20:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:33.356 10:20:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:33.613 Initializing NVMe Controllers 00:09:33.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:33.613 Controller IO queue size 128, less than required. 00:09:33.613 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:33.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:33.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:33.613 Initialization complete. Launching workers. 00:09:33.613 ======================================================== 00:09:33.613 Latency(us) 00:09:33.613 Device Information : IOPS MiB/s Average min max 00:09:33.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004507.58 1000227.45 1040774.93 00:09:33.613 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004388.43 1000168.38 1012429.92 00:09:33.613 ======================================================== 00:09:33.613 Total : 256.00 0.12 1004448.01 1000168.38 1040774.93 00:09:33.613 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2439775 00:09:33.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2439775) - No such process 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2439775 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:33.870 rmmod nvme_tcp 00:09:33.870 rmmod nvme_fabrics 00:09:33.870 rmmod nvme_keyring 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2439336 ']' 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2439336 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2439336 ']' 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2439336 00:09:33.870 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:33.871 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:33.871 10:20:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2439336 00:09:33.871 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:33.871 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:33.871 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2439336' 00:09:33.871 killing process with pid 2439336 00:09:33.871 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2439336 00:09:33.871 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2439336 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:34.130 10:20:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:36.665 00:09:36.665 real 0m12.716s 00:09:36.665 user 0m28.323s 00:09:36.665 sys 0m3.113s 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:36.665 ************************************ 00:09:36.665 END TEST nvmf_delete_subsystem 00:09:36.665 ************************************ 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.665 ************************************ 00:09:36.665 START TEST nvmf_host_management 00:09:36.665 ************************************ 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:36.665 * Looking for test storage... 
00:09:36.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:36.665 10:20:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.665 10:20:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:36.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.665 --rc genhtml_branch_coverage=1 00:09:36.665 --rc genhtml_function_coverage=1 00:09:36.665 --rc genhtml_legend=1 00:09:36.665 --rc geninfo_all_blocks=1 00:09:36.665 --rc geninfo_unexecuted_blocks=1 00:09:36.665 00:09:36.665 ' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:36.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.665 --rc genhtml_branch_coverage=1 00:09:36.665 --rc genhtml_function_coverage=1 00:09:36.665 --rc genhtml_legend=1 00:09:36.665 --rc geninfo_all_blocks=1 00:09:36.665 --rc geninfo_unexecuted_blocks=1 00:09:36.665 00:09:36.665 ' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:36.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.665 --rc genhtml_branch_coverage=1 00:09:36.665 --rc genhtml_function_coverage=1 00:09:36.665 --rc genhtml_legend=1 00:09:36.665 --rc geninfo_all_blocks=1 00:09:36.665 --rc geninfo_unexecuted_blocks=1 00:09:36.665 00:09:36.665 ' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:36.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.665 --rc genhtml_branch_coverage=1 00:09:36.665 --rc genhtml_function_coverage=1 00:09:36.665 --rc genhtml_legend=1 00:09:36.665 --rc geninfo_all_blocks=1 00:09:36.665 --rc geninfo_unexecuted_blocks=1 00:09:36.665 00:09:36.665 ' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.665 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:36.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:09:36.666 10:20:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:38.572 10:20:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.572 10:20:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:38.572 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:38.572 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:38.572 10:20:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:38.572 Found net devices under 0000:09:00.0: cvl_0_0 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:38.572 Found net devices under 0000:09:00.1: cvl_0_1 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.572 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:38.573 10:20:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.573 10:20:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:38.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:09:38.832 00:09:38.832 --- 10.0.0.2 ping statistics --- 00:09:38.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.832 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:09:38.832 00:09:38.832 --- 10.0.0.1 ping statistics --- 00:09:38.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.832 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2442250 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2442250 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2442250 ']' 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.832 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:38.832 [2024-12-09 10:20:11.194405] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:09:38.833 [2024-12-09 10:20:11.194470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.833 [2024-12-09 10:20:11.262488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.092 [2024-12-09 10:20:11.319830] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.092 [2024-12-09 10:20:11.319892] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.092 [2024-12-09 10:20:11.319906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.092 [2024-12-09 10:20:11.319917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.092 [2024-12-09 10:20:11.319925] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.092 [2024-12-09 10:20:11.321453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.092 [2024-12-09 10:20:11.321495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.092 [2024-12-09 10:20:11.321550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:39.093 [2024-12-09 10:20:11.321553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 [2024-12-09 10:20:11.472833] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:39.093 10:20:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.093 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 Malloc0 00:09:39.352 [2024-12-09 10:20:11.547721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2442297 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2442297 /var/tmp/bdevperf.sock 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2442297 ']' 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:39.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:39.352 { 00:09:39.352 "params": { 00:09:39.352 "name": "Nvme$subsystem", 00:09:39.352 "trtype": "$TEST_TRANSPORT", 00:09:39.352 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.352 "adrfam": "ipv4", 00:09:39.352 "trsvcid": "$NVMF_PORT", 00:09:39.352 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.352 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.352 "hdgst": ${hdgst:-false}, 
00:09:39.352 "ddgst": ${ddgst:-false} 00:09:39.352 }, 00:09:39.352 "method": "bdev_nvme_attach_controller" 00:09:39.352 } 00:09:39.352 EOF 00:09:39.352 )") 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:39.352 10:20:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:39.352 "params": { 00:09:39.352 "name": "Nvme0", 00:09:39.352 "trtype": "tcp", 00:09:39.352 "traddr": "10.0.0.2", 00:09:39.352 "adrfam": "ipv4", 00:09:39.352 "trsvcid": "4420", 00:09:39.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:39.352 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:39.352 "hdgst": false, 00:09:39.352 "ddgst": false 00:09:39.352 }, 00:09:39.352 "method": "bdev_nvme_attach_controller" 00:09:39.352 }' 00:09:39.352 [2024-12-09 10:20:11.632000] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:09:39.352 [2024-12-09 10:20:11.632077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442297 ] 00:09:39.352 [2024-12-09 10:20:11.701655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.352 [2024-12-09 10:20:11.761990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.921 Running I/O for 10 seconds... 
00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:39.922 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:40.184 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:40.184 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:40.184 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:40.184 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:40.184 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.185 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.185 [2024-12-09 10:20:12.470711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.470785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.470817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.470830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.470842] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.470855] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.470868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 
10:20:12.470880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f445b0 is same with the state(6) to be set 00:09:40.185 [2024-12-09 10:20:12.474006] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.185 [2024-12-09 10:20:12.474409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.185 [2024-12-09 10:20:12.474423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474596] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474753] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.474979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.474995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 
[2024-12-09 10:20:12.475079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.186 [2024-12-09 10:20:12.475194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 
nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:40.186 [2024-12-09 10:20:12.475353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.186 [2024-12-09 10:20:12.475482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.186 [2024-12-09 10:20:12.475495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.186 [2024-12-09 10:20:12.475510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:40.187 [2024-12-09 10:20:12.475594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:09:40.187 [2024-12-09 10:20:12.475707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475862] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:40.187 [2024-12-09 10:20:12.475946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:40.187 [2024-12-09 10:20:12.475982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:09:40.187 [2024-12-09 10:20:12.477171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:40.187 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:40.187 00:09:40.187 Latency(us) 00:09:40.187 [2024-12-09T09:20:12.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:40.187 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:40.187 Job: Nvme0n1 ended in about 0.40 seconds with error 00:09:40.187 Verification LBA range: start 0x0 length 0x400 00:09:40.187 Nvme0n1 : 0.40 1595.37 99.71 
159.54 0.00 35420.73 2706.39 33981.63 00:09:40.187 [2024-12-09T09:20:12.628Z] =================================================================================================================== 00:09:40.187 [2024-12-09T09:20:12.628Z] Total : 1595.37 99.71 159.54 0.00 35420.73 2706.39 33981.63 00:09:40.187 [2024-12-09 10:20:12.479070] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.187 [2024-12-09 10:20:12.479112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a4660 (9): Bad file descriptor 00:09:40.187 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.187 10:20:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:40.187 [2024-12-09 10:20:12.489604] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:41.125 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2442297 00:09:41.125 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2442297) - No such process 00:09:41.125 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:41.125 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:41.125 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:41.125 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:41.125 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@560 -- # config=() 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:41.126 { 00:09:41.126 "params": { 00:09:41.126 "name": "Nvme$subsystem", 00:09:41.126 "trtype": "$TEST_TRANSPORT", 00:09:41.126 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:41.126 "adrfam": "ipv4", 00:09:41.126 "trsvcid": "$NVMF_PORT", 00:09:41.126 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:41.126 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:41.126 "hdgst": ${hdgst:-false}, 00:09:41.126 "ddgst": ${ddgst:-false} 00:09:41.126 }, 00:09:41.126 "method": "bdev_nvme_attach_controller" 00:09:41.126 } 00:09:41.126 EOF 00:09:41.126 )") 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:41.126 10:20:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:41.126 "params": { 00:09:41.126 "name": "Nvme0", 00:09:41.126 "trtype": "tcp", 00:09:41.126 "traddr": "10.0.0.2", 00:09:41.126 "adrfam": "ipv4", 00:09:41.126 "trsvcid": "4420", 00:09:41.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:41.126 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:41.126 "hdgst": false, 00:09:41.126 "ddgst": false 00:09:41.126 }, 00:09:41.126 "method": "bdev_nvme_attach_controller" 00:09:41.126 }' 00:09:41.126 [2024-12-09 10:20:13.535782] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:09:41.126 [2024-12-09 10:20:13.535854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2442579 ] 00:09:41.386 [2024-12-09 10:20:13.604359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.386 [2024-12-09 10:20:13.664827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.647 Running I/O for 1 seconds... 00:09:42.585 1664.00 IOPS, 104.00 MiB/s 00:09:42.585 Latency(us) 00:09:42.585 [2024-12-09T09:20:15.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.585 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:42.585 Verification LBA range: start 0x0 length 0x400 00:09:42.585 Nvme0n1 : 1.01 1710.42 106.90 0.00 0.00 36804.02 6213.78 33204.91 00:09:42.585 [2024-12-09T09:20:15.026Z] =================================================================================================================== 00:09:42.585 [2024-12-09T09:20:15.026Z] Total : 1710.42 106.90 0.00 0.00 36804.02 6213.78 33204.91 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:42.843 10:20:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:42.843 rmmod nvme_tcp 00:09:42.843 rmmod nvme_fabrics 00:09:42.843 rmmod nvme_keyring 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2442250 ']' 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2442250 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2442250 ']' 00:09:42.843 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2442250 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2442250 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2442250' 00:09:42.844 killing process with pid 2442250 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2442250 00:09:42.844 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2442250 00:09:43.101 [2024-12-09 10:20:15.503095] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.101 10:20:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:45.646 00:09:45.646 real 0m8.932s 00:09:45.646 user 0m19.912s 00:09:45.646 sys 0m2.758s 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:45.646 ************************************ 00:09:45.646 END TEST nvmf_host_management 00:09:45.646 ************************************ 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.646 ************************************ 00:09:45.646 START TEST nvmf_lvol 00:09:45.646 ************************************ 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:45.646 * Looking for test storage... 
00:09:45.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.646 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.647 10:20:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.647 --rc genhtml_branch_coverage=1 00:09:45.647 --rc genhtml_function_coverage=1 00:09:45.647 --rc genhtml_legend=1 00:09:45.647 --rc geninfo_all_blocks=1 00:09:45.647 --rc geninfo_unexecuted_blocks=1 
00:09:45.647 00:09:45.647 ' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.647 --rc genhtml_branch_coverage=1 00:09:45.647 --rc genhtml_function_coverage=1 00:09:45.647 --rc genhtml_legend=1 00:09:45.647 --rc geninfo_all_blocks=1 00:09:45.647 --rc geninfo_unexecuted_blocks=1 00:09:45.647 00:09:45.647 ' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.647 --rc genhtml_branch_coverage=1 00:09:45.647 --rc genhtml_function_coverage=1 00:09:45.647 --rc genhtml_legend=1 00:09:45.647 --rc geninfo_all_blocks=1 00:09:45.647 --rc geninfo_unexecuted_blocks=1 00:09:45.647 00:09:45.647 ' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.647 --rc genhtml_branch_coverage=1 00:09:45.647 --rc genhtml_function_coverage=1 00:09:45.647 --rc genhtml_legend=1 00:09:45.647 --rc geninfo_all_blocks=1 00:09:45.647 --rc geninfo_unexecuted_blocks=1 00:09:45.647 00:09:45.647 ' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.647 10:20:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:45.647 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:45.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:45.648 10:20:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:47.559 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:47.559 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:47.559 
10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:47.559 Found net devices under 0000:09:00.0: cvl_0_0 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:47.559 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:47.560 10:20:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:47.560 Found net devices under 0000:09:00.1: cvl_0_1 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.560 10:20:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:47.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms
00:09:47.821
00:09:47.821 --- 10.0.0.2 ping statistics ---
00:09:47.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:47.821 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:47.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:47.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms
00:09:47.821
00:09:47.821 --- 10.0.0.1 ping statistics ---
00:09:47.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:47.821 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2444794
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2444794
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2444794 ']'
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:47.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:47.821 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:47.821 [2024-12-09 10:20:20.141271] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization...
00:09:47.821 [2024-12-09 10:20:20.141362] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:47.821 [2024-12-09 10:20:20.216890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:48.080 [2024-12-09 10:20:20.278992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:48.080 [2024-12-09 10:20:20.279048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:48.080 [2024-12-09 10:20:20.279069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:48.080 [2024-12-09 10:20:20.279087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:48.080 [2024-12-09 10:20:20.279102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:48.080 [2024-12-09 10:20:20.280759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:48.080 [2024-12-09 10:20:20.280786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:48.080 [2024-12-09 10:20:20.280790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:48.080 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:48.339 [2024-12-09 10:20:20.695506] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:48.339 10:20:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:48.629 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:09:48.629 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:48.887 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:09:48.887 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:09:49.145 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:09:49.714 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3ba1bfd1-8c19-4c28-b05d-c7ac7a28175b
00:09:49.714 10:20:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ba1bfd1-8c19-4c28-b05d-c7ac7a28175b lvol 20
00:09:49.714 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=98b05406-ee9c-4fa4-bf17-c3cbfd48c3af
00:09:49.714 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:50.282 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 98b05406-ee9c-4fa4-bf17-c3cbfd48c3af
00:09:50.282 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:50.539 [2024-12-09 10:20:22.920869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:50.539 10:20:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:50.798 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2445106
00:09:50.798 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:09:50.798 10:20:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:09:52.176 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 98b05406-ee9c-4fa4-bf17-c3cbfd48c3af MY_SNAPSHOT
00:09:52.176 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2632fc23-24a0-42d6-9822-3e0c0cf64225
00:09:52.176 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 98b05406-ee9c-4fa4-bf17-c3cbfd48c3af 30
00:09:52.433 10:20:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2632fc23-24a0-42d6-9822-3e0c0cf64225 MY_CLONE
00:09:53.000 10:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fa4583f7-2686-453b-9482-3ba7a1491ab8
00:09:53.000 10:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fa4583f7-2686-453b-9482-3ba7a1491ab8
00:09:53.566 10:20:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2445106
00:10:01.687 Initializing NVMe Controllers
00:10:01.687 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:10:01.687 Controller IO queue size 128, less than required.
00:10:01.687 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:10:01.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:10:01.687 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:10:01.687 Initialization complete. Launching workers.
00:10:01.687 ========================================================
00:10:01.687 Latency(us)
00:10:01.687 Device Information : IOPS MiB/s Average min max
00:10:01.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10363.30 40.48 12353.96 2200.54 119848.03
00:10:01.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10283.50 40.17 12451.57 2276.88 50788.69
00:10:01.687 ========================================================
00:10:01.687 Total : 20646.80 80.65 12402.58 2200.54 119848.03
00:10:01.687
00:10:01.687 10:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:10:01.687 10:20:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98b05406-ee9c-4fa4-bf17-c3cbfd48c3af
00:10:01.944 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ba1bfd1-8c19-4c28-b05d-c7ac7a28175b
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:10:02.203 10:20:34
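The xtrace above drives target/nvmf_lvol.sh end to end. As a condensed sketch of the setup sequence before the perf run (the short `scripts/rpc.py` path is an assumption standing in for the full spdk tree path, and `<lvs-uuid>`/`<lvol-uuid>` are placeholders for the UUIDs each create call prints, 3ba1bfd1-… and 98b05406-… in this run), the commands are only assembled and printed here, since executing them needs a live nvmf_tgt:

```shell
#!/usr/bin/env bash
# Condensed sketch of the RPC sequence traced above; commands are collected
# and printed rather than executed, because running them requires an SPDK
# nvmf_tgt listening on /var/tmp/spdk.sock.
RPC="scripts/rpc.py"               # assumed relative path to the SPDK rpc helper
NQN="nqn.2016-06.io.spdk:cnode0"
cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC bdev_malloc_create 64 512"                                   # -> Malloc0
  "$RPC bdev_malloc_create 64 512"                                   # -> Malloc1
  "$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'"
  "$RPC bdev_lvol_create_lvstore raid0 lvs"                          # prints the lvstore UUID
  "$RPC bdev_lvol_create -u <lvs-uuid> lvol 20"                      # prints the lvol UUID
  "$RPC nvmf_create_subsystem $NQN -a -s SPDK0"
  "$RPC nvmf_subsystem_add_ns $NQN <lvol-uuid>"
  "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${cmds[@]}"
```

The test then launches spdk_nvme_perf against that listener and, while I/O is in flight, exercises bdev_lvol_snapshot, bdev_lvol_resize, bdev_lvol_clone and bdev_lvol_inflate, which is what the latency table summarizes.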
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:02.203 rmmod nvme_tcp
00:10:02.203 rmmod nvme_fabrics
00:10:02.203 rmmod nvme_keyring
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2444794 ']'
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2444794
00:10:02.203 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2444794 ']'
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2444794
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444794
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444794'
00:10:02.204 killing process with pid 2444794
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2444794
00:10:02.204 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2444794
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:02.463 10:20:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:05.004
00:10:05.004 real 0m19.309s
00:10:05.004 user 1m6.057s
00:10:05.004 sys 0m5.355s
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:10:05.004 ************************************
00:10:05.004 END TEST nvmf_lvol
00:10:05.004 ************************************
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:05.004 ************************************
00:10:05.004 START TEST nvmf_lvs_grow
00:10:05.004 ************************************
00:10:05.004 10:20:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:10:05.004 * Looking for test storage...
00:10:05.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:10:05.004 10:20:37
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:05.004 --rc genhtml_branch_coverage=1
00:10:05.004 --rc genhtml_function_coverage=1
00:10:05.004 --rc genhtml_legend=1
00:10:05.004 --rc geninfo_all_blocks=1
00:10:05.004 --rc geninfo_unexecuted_blocks=1
00:10:05.004
00:10:05.004 '
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:05.004 --rc genhtml_branch_coverage=1
00:10:05.004 --rc genhtml_function_coverage=1
00:10:05.004 --rc genhtml_legend=1
00:10:05.004 --rc geninfo_all_blocks=1
00:10:05.004 --rc geninfo_unexecuted_blocks=1
00:10:05.004
00:10:05.004 '
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:05.004 --rc genhtml_branch_coverage=1
00:10:05.004 --rc genhtml_function_coverage=1
00:10:05.004 --rc genhtml_legend=1
00:10:05.004 --rc geninfo_all_blocks=1
00:10:05.004 --rc geninfo_unexecuted_blocks=1
00:10:05.004
00:10:05.004 '
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:10:05.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:05.004 --rc genhtml_branch_coverage=1
00:10:05.004 --rc genhtml_function_coverage=1
00:10:05.004 --rc genhtml_legend=1
00:10:05.004 --rc geninfo_all_blocks=1
00:10:05.004 --rc geninfo_unexecuted_blocks=1
00:10:05.004
00:10:05.004 '
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:05.004 10:20:37
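The cmp_versions trace above comes from scripts/common.sh deciding whether the installed lcov predates 2 (`lt 1.15 2`): each version string is split on `IFS=.-:` and compared field by field. A minimal standalone sketch of that comparison (my own simplified re-implementation for illustration, not the SPDK code itself; non-numeric or leading-zero fields are not handled):

```shell
#!/usr/bin/env bash
# lt A B -> exit 0 when version A sorts strictly before version B.
# Fields are split on '.', '-' and ':' (mirroring the IFS=.-: trace above)
# and compared numerically; a missing field counts as 0.
lt() {
  local -a ver1 ver2
  local i n
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  n=${#ver1[@]}
  if (( ${#ver2[@]} > n )); then n=${#ver2[@]}; fi
  for (( i = 0; i < n; i++ )); do
    if (( ${ver1[i]:-0} < ${ver2[i]:-0} )); then return 0; fi
    if (( ${ver1[i]:-0} > ${ver2[i]:-0} )); then return 1; fi
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 sorts before 2"
```

In the trace, the comparison stops at the first differing field (1 < 2, line 368 returns 0), which is why only one loop iteration appears.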
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:05.004 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:10:05.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:05.005
10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:10:05.005 10:20:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:10:06.914 Found 0000:09:00.0 (0x8086 - 0x159b)
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:10:06.914 Found 0000:09:00.1 (0x8086 - 0x159b)
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:10:06.914 Found net devices under 0000:09:00.0: cvl_0_0
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow --
nvmf/common.sh@418 -- # [[ up == up ]] 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:06.914 Found net devices under 0000:09:00.1: cvl_0_1 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:06.914 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.915 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:07.174 10:20:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:07.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:07.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:10:07.174 00:10:07.174 --- 10.0.0.2 ping statistics --- 00:10:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.174 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:07.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:07.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:10:07.174 00:10:07.174 --- 10.0.0.1 ping statistics --- 00:10:07.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:07.174 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:07.174 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2448507 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2448507 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2448507 ']' 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:07.175 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.175 [2024-12-09 10:20:39.528175] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:10:07.175 [2024-12-09 10:20:39.528244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:07.175 [2024-12-09 10:20:39.598999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.433 [2024-12-09 10:20:39.659075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:07.433 [2024-12-09 10:20:39.659151] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:07.433 [2024-12-09 10:20:39.659168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:07.433 [2024-12-09 10:20:39.659204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:07.433 [2024-12-09 10:20:39.659215] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
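The trace above (common.sh@250 through @291) shows `nvmf_tcp_init` building the test topology: the target-side port is moved into a network namespace so the initiator and target traverse a real TCP path between the two physical ports, the firewall is opened for the NVMe/TCP listener on 4420, and both directions are ping-verified before `nvmf_tgt` is launched inside the namespace. A minimal sketch of that flow, using the interface names and addresses from the log (requires root and two connected ports; the `ipts` helper in the trace is just `iptables` with an `SPDK_NVMF` comment added):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow traced above: isolate the target-side
# port in a network namespace so target and initiator use a real TCP path.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0           # target-side port, moved into the namespace
INI_IF=cvl_0_1           # initiator-side port, stays in the default namespace

# start from a clean addressing state
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# accept NVMe/TCP connections arriving on the initiator-side port
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions before starting nvmf_tgt
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

From then on every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` array set at common.sh@266.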
00:10:07.433 [2024-12-09 10:20:39.659868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:07.433 10:20:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.691 [2024-12-09 10:20:40.060335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:07.691 ************************************ 00:10:07.691 START TEST lvs_grow_clean 00:10:07.691 ************************************ 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:07.691 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:08.278 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:08.278 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:08.278 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:08.278 10:20:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:08.278 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:08.847 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:08.847 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:08.847 10:20:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d80d6953-06a7-406a-8763-2bd19c09a06d lvol 150 00:10:08.847 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=89442ddb-addc-48b4-ae81-b706509cc811 00:10:08.847 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:08.847 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:09.135 [2024-12-09 10:20:41.517639] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:09.135 [2024-12-09 10:20:41.517746] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:09.135 true 00:10:09.135 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:09.135 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:09.420 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:09.420 10:20:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:09.678 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89442ddb-addc-48b4-ae81-b706509cc811 00:10:09.935 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:10.192 [2024-12-09 10:20:42.600894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:10.192 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2448955 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:10.450 10:20:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2448955 /var/tmp/bdevperf.sock 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2448955 ']' 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:10.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.450 10:20:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:10.708 [2024-12-09 10:20:42.928231] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:10:10.708 [2024-12-09 10:20:42.928301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2448955 ] 00:10:10.708 [2024-12-09 10:20:42.993375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.708 [2024-12-09 10:20:43.052285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:10.967 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.967 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:10.967 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:11.225 Nvme0n1 00:10:11.225 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:11.484 [ 00:10:11.484 { 00:10:11.484 "name": "Nvme0n1", 00:10:11.484 "aliases": [ 00:10:11.484 "89442ddb-addc-48b4-ae81-b706509cc811" 00:10:11.484 ], 00:10:11.484 "product_name": "NVMe disk", 00:10:11.484 "block_size": 4096, 00:10:11.484 "num_blocks": 38912, 00:10:11.484 "uuid": "89442ddb-addc-48b4-ae81-b706509cc811", 00:10:11.484 "numa_id": 0, 00:10:11.484 "assigned_rate_limits": { 00:10:11.484 "rw_ios_per_sec": 0, 00:10:11.484 "rw_mbytes_per_sec": 0, 00:10:11.484 "r_mbytes_per_sec": 0, 00:10:11.484 "w_mbytes_per_sec": 0 00:10:11.484 }, 00:10:11.484 "claimed": false, 00:10:11.484 "zoned": false, 00:10:11.484 "supported_io_types": { 00:10:11.484 "read": true, 
00:10:11.484 "write": true, 00:10:11.484 "unmap": true, 00:10:11.484 "flush": true, 00:10:11.484 "reset": true, 00:10:11.484 "nvme_admin": true, 00:10:11.484 "nvme_io": true, 00:10:11.484 "nvme_io_md": false, 00:10:11.484 "write_zeroes": true, 00:10:11.484 "zcopy": false, 00:10:11.484 "get_zone_info": false, 00:10:11.484 "zone_management": false, 00:10:11.484 "zone_append": false, 00:10:11.484 "compare": true, 00:10:11.484 "compare_and_write": true, 00:10:11.484 "abort": true, 00:10:11.484 "seek_hole": false, 00:10:11.484 "seek_data": false, 00:10:11.484 "copy": true, 00:10:11.484 "nvme_iov_md": false 00:10:11.484 }, 00:10:11.484 "memory_domains": [ 00:10:11.484 { 00:10:11.484 "dma_device_id": "system", 00:10:11.484 "dma_device_type": 1 00:10:11.484 } 00:10:11.484 ], 00:10:11.484 "driver_specific": { 00:10:11.484 "nvme": [ 00:10:11.484 { 00:10:11.484 "trid": { 00:10:11.484 "trtype": "TCP", 00:10:11.484 "adrfam": "IPv4", 00:10:11.484 "traddr": "10.0.0.2", 00:10:11.484 "trsvcid": "4420", 00:10:11.484 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:11.484 }, 00:10:11.484 "ctrlr_data": { 00:10:11.484 "cntlid": 1, 00:10:11.484 "vendor_id": "0x8086", 00:10:11.484 "model_number": "SPDK bdev Controller", 00:10:11.484 "serial_number": "SPDK0", 00:10:11.484 "firmware_revision": "25.01", 00:10:11.484 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:11.484 "oacs": { 00:10:11.484 "security": 0, 00:10:11.484 "format": 0, 00:10:11.484 "firmware": 0, 00:10:11.484 "ns_manage": 0 00:10:11.484 }, 00:10:11.484 "multi_ctrlr": true, 00:10:11.484 "ana_reporting": false 00:10:11.484 }, 00:10:11.484 "vs": { 00:10:11.484 "nvme_version": "1.3" 00:10:11.484 }, 00:10:11.484 "ns_data": { 00:10:11.484 "id": 1, 00:10:11.484 "can_share": true 00:10:11.484 } 00:10:11.484 } 00:10:11.484 ], 00:10:11.484 "mp_policy": "active_passive" 00:10:11.484 } 00:10:11.484 } 00:10:11.484 ] 00:10:11.484 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2449090 00:10:11.484 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:11.484 10:20:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:11.743 Running I/O for 10 seconds... 00:10:12.694 Latency(us) 00:10:12.694 [2024-12-09T09:20:45.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:12.694 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.694 Nvme0n1 : 1.00 15123.00 59.07 0.00 0.00 0.00 0.00 0.00 00:10:12.694 [2024-12-09T09:20:45.135Z] =================================================================================================================== 00:10:12.694 [2024-12-09T09:20:45.135Z] Total : 15123.00 59.07 0.00 0.00 0.00 0.00 0.00 00:10:12.694 00:10:13.632 10:20:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:13.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.633 Nvme0n1 : 2.00 15285.50 59.71 0.00 0.00 0.00 0.00 0.00 00:10:13.633 [2024-12-09T09:20:46.074Z] =================================================================================================================== 00:10:13.633 [2024-12-09T09:20:46.074Z] Total : 15285.50 59.71 0.00 0.00 0.00 0.00 0.00 00:10:13.633 00:10:13.891 true 00:10:13.891 10:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:13.891 10:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:14.149 10:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:14.149 10:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:14.149 10:20:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2449090 00:10:14.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.716 Nvme0n1 : 3.00 15357.00 59.99 0.00 0.00 0.00 0.00 0.00 00:10:14.716 [2024-12-09T09:20:47.157Z] =================================================================================================================== 00:10:14.716 [2024-12-09T09:20:47.157Z] Total : 15357.00 59.99 0.00 0.00 0.00 0.00 0.00 00:10:14.716 00:10:15.649 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:15.649 Nvme0n1 : 4.00 15440.75 60.32 0.00 0.00 0.00 0.00 0.00 00:10:15.649 [2024-12-09T09:20:48.090Z] =================================================================================================================== 00:10:15.649 [2024-12-09T09:20:48.090Z] Total : 15440.75 60.32 0.00 0.00 0.00 0.00 0.00 00:10:15.649 00:10:16.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:16.584 Nvme0n1 : 5.00 15529.40 60.66 0.00 0.00 0.00 0.00 0.00 00:10:16.584 [2024-12-09T09:20:49.025Z] =================================================================================================================== 00:10:16.584 [2024-12-09T09:20:49.025Z] Total : 15529.40 60.66 0.00 0.00 0.00 0.00 0.00 00:10:16.584 00:10:17.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:17.957 Nvme0n1 : 6.00 15577.33 60.85 0.00 0.00 0.00 0.00 0.00 00:10:17.957 [2024-12-09T09:20:50.398Z] =================================================================================================================== 00:10:17.957 
[2024-12-09T09:20:50.398Z] Total : 15577.33 60.85 0.00 0.00 0.00 0.00 0.00 00:10:17.957 00:10:18.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:18.888 Nvme0n1 : 7.00 15594.71 60.92 0.00 0.00 0.00 0.00 0.00 00:10:18.888 [2024-12-09T09:20:51.329Z] =================================================================================================================== 00:10:18.888 [2024-12-09T09:20:51.329Z] Total : 15594.71 60.92 0.00 0.00 0.00 0.00 0.00 00:10:18.888 00:10:19.823 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:19.823 Nvme0n1 : 8.00 15640.38 61.10 0.00 0.00 0.00 0.00 0.00 00:10:19.823 [2024-12-09T09:20:52.264Z] =================================================================================================================== 00:10:19.823 [2024-12-09T09:20:52.264Z] Total : 15640.38 61.10 0.00 0.00 0.00 0.00 0.00 00:10:19.823 00:10:20.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:20.758 Nvme0n1 : 9.00 15680.89 61.25 0.00 0.00 0.00 0.00 0.00 00:10:20.758 [2024-12-09T09:20:53.199Z] =================================================================================================================== 00:10:20.758 [2024-12-09T09:20:53.199Z] Total : 15680.89 61.25 0.00 0.00 0.00 0.00 0.00 00:10:20.758 00:10:21.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:21.693 Nvme0n1 : 10.00 15717.40 61.40 0.00 0.00 0.00 0.00 0.00 00:10:21.693 [2024-12-09T09:20:54.134Z] =================================================================================================================== 00:10:21.693 [2024-12-09T09:20:54.134Z] Total : 15717.40 61.40 0.00 0.00 0.00 0.00 0.00 00:10:21.693 00:10:21.693 00:10:21.693 Latency(us) 00:10:21.693 [2024-12-09T09:20:54.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:21.693 Nvme0n1 : 10.00 15722.19 61.41 0.00 0.00 8136.26 2767.08 15922.82 00:10:21.693 [2024-12-09T09:20:54.134Z] =================================================================================================================== 00:10:21.693 [2024-12-09T09:20:54.134Z] Total : 15722.19 61.41 0.00 0.00 8136.26 2767.08 15922.82 00:10:21.693 { 00:10:21.693 "results": [ 00:10:21.693 { 00:10:21.693 "job": "Nvme0n1", 00:10:21.693 "core_mask": "0x2", 00:10:21.693 "workload": "randwrite", 00:10:21.693 "status": "finished", 00:10:21.693 "queue_depth": 128, 00:10:21.693 "io_size": 4096, 00:10:21.693 "runtime": 10.002929, 00:10:21.693 "iops": 15722.194969093553, 00:10:21.693 "mibps": 61.41482409802169, 00:10:21.693 "io_failed": 0, 00:10:21.693 "io_timeout": 0, 00:10:21.693 "avg_latency_us": 8136.26300126512, 00:10:21.693 "min_latency_us": 2767.0755555555556, 00:10:21.693 "max_latency_us": 15922.82074074074 00:10:21.693 } 00:10:21.693 ], 00:10:21.693 "core_count": 1 00:10:21.693 } 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2448955 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2448955 ']' 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2448955 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2448955 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:21.693 10:20:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2448955' 00:10:21.693 killing process with pid 2448955 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2448955 00:10:21.693 Received shutdown signal, test time was about 10.000000 seconds 00:10:21.693 00:10:21.693 Latency(us) 00:10:21.693 [2024-12-09T09:20:54.134Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:21.693 [2024-12-09T09:20:54.134Z] =================================================================================================================== 00:10:21.693 [2024-12-09T09:20:54.134Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:21.693 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2448955 00:10:21.951 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:22.209 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:22.467 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:22.467 10:20:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:22.725 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:22.725 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:22.725 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:22.983 [2024-12-09 10:20:55.394015] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:22.983 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:22.983 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:22.983 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:22.983 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:22.983 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:22.983 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.240 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.240 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.241 
10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:23.241 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:23.241 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:23.241 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:23.241 request: 00:10:23.241 { 00:10:23.241 "uuid": "d80d6953-06a7-406a-8763-2bd19c09a06d", 00:10:23.241 "method": "bdev_lvol_get_lvstores", 00:10:23.241 "req_id": 1 00:10:23.241 } 00:10:23.241 Got JSON-RPC error response 00:10:23.241 response: 00:10:23.241 { 00:10:23.241 "code": -19, 00:10:23.241 "message": "No such device" 00:10:23.241 } 00:10:23.498 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:23.498 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:23.498 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:23.498 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:23.498 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:23.498 aio_bdev 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 89442ddb-addc-48b4-ae81-b706509cc811 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=89442ddb-addc-48b4-ae81-b706509cc811 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:23.756 10:20:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:24.012 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 89442ddb-addc-48b4-ae81-b706509cc811 -t 2000 00:10:24.269 [ 00:10:24.269 { 00:10:24.269 "name": "89442ddb-addc-48b4-ae81-b706509cc811", 00:10:24.269 "aliases": [ 00:10:24.269 "lvs/lvol" 00:10:24.269 ], 00:10:24.269 "product_name": "Logical Volume", 00:10:24.269 "block_size": 4096, 00:10:24.269 "num_blocks": 38912, 00:10:24.269 "uuid": "89442ddb-addc-48b4-ae81-b706509cc811", 00:10:24.269 "assigned_rate_limits": { 00:10:24.269 "rw_ios_per_sec": 0, 00:10:24.269 "rw_mbytes_per_sec": 0, 00:10:24.269 "r_mbytes_per_sec": 0, 00:10:24.269 "w_mbytes_per_sec": 0 00:10:24.269 }, 00:10:24.269 "claimed": false, 00:10:24.269 "zoned": false, 00:10:24.269 "supported_io_types": { 00:10:24.269 "read": true, 00:10:24.269 "write": true, 00:10:24.269 "unmap": true, 00:10:24.269 "flush": false, 00:10:24.269 "reset": true, 00:10:24.269 
"nvme_admin": false, 00:10:24.269 "nvme_io": false, 00:10:24.269 "nvme_io_md": false, 00:10:24.269 "write_zeroes": true, 00:10:24.269 "zcopy": false, 00:10:24.269 "get_zone_info": false, 00:10:24.269 "zone_management": false, 00:10:24.269 "zone_append": false, 00:10:24.269 "compare": false, 00:10:24.269 "compare_and_write": false, 00:10:24.269 "abort": false, 00:10:24.269 "seek_hole": true, 00:10:24.269 "seek_data": true, 00:10:24.269 "copy": false, 00:10:24.269 "nvme_iov_md": false 00:10:24.269 }, 00:10:24.269 "driver_specific": { 00:10:24.269 "lvol": { 00:10:24.269 "lvol_store_uuid": "d80d6953-06a7-406a-8763-2bd19c09a06d", 00:10:24.269 "base_bdev": "aio_bdev", 00:10:24.269 "thin_provision": false, 00:10:24.269 "num_allocated_clusters": 38, 00:10:24.269 "snapshot": false, 00:10:24.269 "clone": false, 00:10:24.269 "esnap_clone": false 00:10:24.269 } 00:10:24.269 } 00:10:24.269 } 00:10:24.269 ] 00:10:24.269 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:24.269 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:24.269 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:24.527 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:24.527 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:24.527 10:20:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:24.784 10:20:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:24.784 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89442ddb-addc-48b4-ae81-b706509cc811 00:10:25.041 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d80d6953-06a7-406a-8763-2bd19c09a06d 00:10:25.298 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:25.555 00:10:25.555 real 0m17.756s 00:10:25.555 user 0m16.454s 00:10:25.555 sys 0m2.210s 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:25.555 ************************************ 00:10:25.555 END TEST lvs_grow_clean 00:10:25.555 ************************************ 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:25.555 ************************************ 
00:10:25.555 START TEST lvs_grow_dirty 00:10:25.555 ************************************ 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:25.555 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:25.556 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:25.556 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:25.556 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:25.556 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:25.556 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:25.556 10:20:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:25.813 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:25.813 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:26.070 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:26.070 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:26.070 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:26.328 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:26.328 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:26.328 10:20:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f lvol 150 00:10:26.585 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=617b348a-d012-46f9-a04e-479be6839ada 00:10:26.585 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:26.585 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:26.841 [2024-12-09 10:20:59.277572] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:26.841 [2024-12-09 10:20:59.277672] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:26.841 true 00:10:27.098 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:27.098 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:27.356 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:27.356 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:27.614 10:20:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 617b348a-d012-46f9-a04e-479be6839ada 00:10:27.873 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:28.131 [2024-12-09 10:21:00.396882] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.131 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2451136 00:10:28.390 10:21:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2451136 /var/tmp/bdevperf.sock 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2451136 ']' 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:28.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.390 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:28.390 [2024-12-09 10:21:00.738137] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:10:28.390 [2024-12-09 10:21:00.738246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2451136 ] 00:10:28.390 [2024-12-09 10:21:00.805707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.648 [2024-12-09 10:21:00.866389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.648 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.648 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:28.648 10:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:28.906 Nvme0n1 00:10:28.906 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:29.164 [ 00:10:29.164 { 00:10:29.164 "name": "Nvme0n1", 00:10:29.164 "aliases": [ 00:10:29.164 "617b348a-d012-46f9-a04e-479be6839ada" 00:10:29.164 ], 00:10:29.164 "product_name": "NVMe disk", 00:10:29.164 "block_size": 4096, 00:10:29.164 "num_blocks": 38912, 00:10:29.164 "uuid": "617b348a-d012-46f9-a04e-479be6839ada", 00:10:29.164 "numa_id": 0, 00:10:29.164 "assigned_rate_limits": { 00:10:29.164 "rw_ios_per_sec": 0, 00:10:29.164 "rw_mbytes_per_sec": 0, 00:10:29.164 "r_mbytes_per_sec": 0, 00:10:29.164 "w_mbytes_per_sec": 0 00:10:29.164 }, 00:10:29.164 "claimed": false, 00:10:29.164 "zoned": false, 00:10:29.164 "supported_io_types": { 00:10:29.164 "read": true, 
00:10:29.164 "write": true, 00:10:29.164 "unmap": true, 00:10:29.164 "flush": true, 00:10:29.164 "reset": true, 00:10:29.164 "nvme_admin": true, 00:10:29.164 "nvme_io": true, 00:10:29.164 "nvme_io_md": false, 00:10:29.164 "write_zeroes": true, 00:10:29.164 "zcopy": false, 00:10:29.164 "get_zone_info": false, 00:10:29.164 "zone_management": false, 00:10:29.164 "zone_append": false, 00:10:29.164 "compare": true, 00:10:29.164 "compare_and_write": true, 00:10:29.164 "abort": true, 00:10:29.164 "seek_hole": false, 00:10:29.164 "seek_data": false, 00:10:29.164 "copy": true, 00:10:29.164 "nvme_iov_md": false 00:10:29.164 }, 00:10:29.164 "memory_domains": [ 00:10:29.164 { 00:10:29.164 "dma_device_id": "system", 00:10:29.164 "dma_device_type": 1 00:10:29.164 } 00:10:29.164 ], 00:10:29.164 "driver_specific": { 00:10:29.164 "nvme": [ 00:10:29.164 { 00:10:29.164 "trid": { 00:10:29.164 "trtype": "TCP", 00:10:29.164 "adrfam": "IPv4", 00:10:29.164 "traddr": "10.0.0.2", 00:10:29.164 "trsvcid": "4420", 00:10:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:29.164 }, 00:10:29.164 "ctrlr_data": { 00:10:29.164 "cntlid": 1, 00:10:29.164 "vendor_id": "0x8086", 00:10:29.164 "model_number": "SPDK bdev Controller", 00:10:29.164 "serial_number": "SPDK0", 00:10:29.164 "firmware_revision": "25.01", 00:10:29.164 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:29.164 "oacs": { 00:10:29.164 "security": 0, 00:10:29.164 "format": 0, 00:10:29.164 "firmware": 0, 00:10:29.164 "ns_manage": 0 00:10:29.164 }, 00:10:29.164 "multi_ctrlr": true, 00:10:29.164 "ana_reporting": false 00:10:29.164 }, 00:10:29.164 "vs": { 00:10:29.164 "nvme_version": "1.3" 00:10:29.164 }, 00:10:29.164 "ns_data": { 00:10:29.164 "id": 1, 00:10:29.164 "can_share": true 00:10:29.164 } 00:10:29.164 } 00:10:29.164 ], 00:10:29.164 "mp_policy": "active_passive" 00:10:29.164 } 00:10:29.164 } 00:10:29.164 ] 00:10:29.423 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2451254 00:10:29.423 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:29.423 10:21:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:29.423 Running I/O for 10 seconds... 00:10:30.355 Latency(us) 00:10:30.355 [2024-12-09T09:21:02.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:30.355 Nvme0n1 : 1.00 13220.00 51.64 0.00 0.00 0.00 0.00 0.00 00:10:30.355 [2024-12-09T09:21:02.796Z] =================================================================================================================== 00:10:30.355 [2024-12-09T09:21:02.796Z] Total : 13220.00 51.64 0.00 0.00 0.00 0.00 0.00 00:10:30.355 00:10:31.290 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:31.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:31.290 Nvme0n1 : 2.00 13330.00 52.07 0.00 0.00 0.00 0.00 0.00 00:10:31.290 [2024-12-09T09:21:03.731Z] =================================================================================================================== 00:10:31.290 [2024-12-09T09:21:03.731Z] Total : 13330.00 52.07 0.00 0.00 0.00 0.00 0.00 00:10:31.290 00:10:31.548 true 00:10:31.548 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:31.548 10:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:31.805 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:31.805 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:31.805 10:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2451254 00:10:32.367 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:32.367 Nvme0n1 : 3.00 13398.67 52.34 0.00 0.00 0.00 0.00 0.00 00:10:32.367 [2024-12-09T09:21:04.808Z] =================================================================================================================== 00:10:32.367 [2024-12-09T09:21:04.808Z] Total : 13398.67 52.34 0.00 0.00 0.00 0.00 0.00 00:10:32.367 00:10:33.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:33.296 Nvme0n1 : 4.00 13473.00 52.63 0.00 0.00 0.00 0.00 0.00 00:10:33.296 [2024-12-09T09:21:05.737Z] =================================================================================================================== 00:10:33.296 [2024-12-09T09:21:05.737Z] Total : 13473.00 52.63 0.00 0.00 0.00 0.00 0.00 00:10:33.296 00:10:34.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:34.670 Nvme0n1 : 5.00 13536.80 52.88 0.00 0.00 0.00 0.00 0.00 00:10:34.670 [2024-12-09T09:21:07.111Z] =================================================================================================================== 00:10:34.670 [2024-12-09T09:21:07.111Z] Total : 13536.80 52.88 0.00 0.00 0.00 0.00 0.00 00:10:34.670 00:10:35.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:35.601 Nvme0n1 : 6.00 13566.00 52.99 0.00 0.00 0.00 0.00 0.00 00:10:35.601 [2024-12-09T09:21:08.042Z] =================================================================================================================== 00:10:35.601 
[2024-12-09T09:21:08.042Z] Total : 13566.00 52.99 0.00 0.00 0.00 0.00 0.00 00:10:35.601 00:10:36.532 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:36.532 Nvme0n1 : 7.00 13602.86 53.14 0.00 0.00 0.00 0.00 0.00 00:10:36.532 [2024-12-09T09:21:08.973Z] =================================================================================================================== 00:10:36.532 [2024-12-09T09:21:08.973Z] Total : 13602.86 53.14 0.00 0.00 0.00 0.00 0.00 00:10:36.532 00:10:37.466 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:37.466 Nvme0n1 : 8.00 13640.50 53.28 0.00 0.00 0.00 0.00 0.00 00:10:37.466 [2024-12-09T09:21:09.907Z] =================================================================================================================== 00:10:37.466 [2024-12-09T09:21:09.907Z] Total : 13640.50 53.28 0.00 0.00 0.00 0.00 0.00 00:10:37.466 00:10:38.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:38.401 Nvme0n1 : 9.00 13655.56 53.34 0.00 0.00 0.00 0.00 0.00 00:10:38.401 [2024-12-09T09:21:10.842Z] =================================================================================================================== 00:10:38.401 [2024-12-09T09:21:10.842Z] Total : 13655.56 53.34 0.00 0.00 0.00 0.00 0.00 00:10:38.401 00:10:39.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:39.338 Nvme0n1 : 10.00 13685.20 53.46 0.00 0.00 0.00 0.00 0.00 00:10:39.338 [2024-12-09T09:21:11.779Z] =================================================================================================================== 00:10:39.338 [2024-12-09T09:21:11.779Z] Total : 13685.20 53.46 0.00 0.00 0.00 0.00 0.00 00:10:39.338 00:10:39.338 00:10:39.338 Latency(us) 00:10:39.338 [2024-12-09T09:21:11.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.338 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:39.338 Nvme0n1 : 10.01 13685.47 53.46 0.00 0.00 9344.63 5776.88 15340.28 00:10:39.338 [2024-12-09T09:21:11.779Z] =================================================================================================================== 00:10:39.338 [2024-12-09T09:21:11.779Z] Total : 13685.47 53.46 0.00 0.00 9344.63 5776.88 15340.28 00:10:39.338 { 00:10:39.338 "results": [ 00:10:39.338 { 00:10:39.338 "job": "Nvme0n1", 00:10:39.338 "core_mask": "0x2", 00:10:39.338 "workload": "randwrite", 00:10:39.338 "status": "finished", 00:10:39.338 "queue_depth": 128, 00:10:39.338 "io_size": 4096, 00:10:39.338 "runtime": 10.008569, 00:10:39.338 "iops": 13685.472918256346, 00:10:39.338 "mibps": 53.45887858693885, 00:10:39.338 "io_failed": 0, 00:10:39.338 "io_timeout": 0, 00:10:39.338 "avg_latency_us": 9344.627578104635, 00:10:39.338 "min_latency_us": 5776.877037037037, 00:10:39.338 "max_latency_us": 15340.278518518518 00:10:39.338 } 00:10:39.338 ], 00:10:39.338 "core_count": 1 00:10:39.338 } 00:10:39.338 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2451136 00:10:39.338 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2451136 ']' 00:10:39.338 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2451136 00:10:39.338 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:39.338 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.338 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2451136 00:10:39.642 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:39.642 10:21:11 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:39.642 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2451136' 00:10:39.642 killing process with pid 2451136 00:10:39.642 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2451136 00:10:39.642 Received shutdown signal, test time was about 10.000000 seconds 00:10:39.642 00:10:39.642 Latency(us) 00:10:39.642 [2024-12-09T09:21:12.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:39.643 [2024-12-09T09:21:12.084Z] =================================================================================================================== 00:10:39.643 [2024-12-09T09:21:12.084Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:39.643 10:21:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2451136 00:10:39.900 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.900 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:40.466 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:40.466 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:40.466 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:40.466 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:40.466 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2448507 00:10:40.466 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2448507 00:10:40.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2448507 Killed "${NVMF_APP[@]}" "$@" 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2453119 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2453119 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2453119 ']' 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.724 10:21:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.724 10:21:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:40.724 [2024-12-09 10:21:12.974633] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:10:40.724 [2024-12-09 10:21:12.974723] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.724 [2024-12-09 10:21:13.049861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.724 [2024-12-09 10:21:13.107515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.724 [2024-12-09 10:21:13.107568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.725 [2024-12-09 10:21:13.107596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.725 [2024-12-09 10:21:13.107608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.725 [2024-12-09 10:21:13.107617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:40.725 [2024-12-09 10:21:13.108184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.981 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.238 [2024-12-09 10:21:13.493155] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:41.238 [2024-12-09 10:21:13.493282] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:41.238 [2024-12-09 10:21:13.493329] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 617b348a-d012-46f9-a04e-479be6839ada 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=617b348a-d012-46f9-a04e-479be6839ada 
00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.238 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:41.495 10:21:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 617b348a-d012-46f9-a04e-479be6839ada -t 2000 00:10:41.753 [ 00:10:41.753 { 00:10:41.753 "name": "617b348a-d012-46f9-a04e-479be6839ada", 00:10:41.753 "aliases": [ 00:10:41.753 "lvs/lvol" 00:10:41.753 ], 00:10:41.753 "product_name": "Logical Volume", 00:10:41.753 "block_size": 4096, 00:10:41.753 "num_blocks": 38912, 00:10:41.753 "uuid": "617b348a-d012-46f9-a04e-479be6839ada", 00:10:41.753 "assigned_rate_limits": { 00:10:41.753 "rw_ios_per_sec": 0, 00:10:41.753 "rw_mbytes_per_sec": 0, 00:10:41.753 "r_mbytes_per_sec": 0, 00:10:41.753 "w_mbytes_per_sec": 0 00:10:41.753 }, 00:10:41.753 "claimed": false, 00:10:41.753 "zoned": false, 00:10:41.753 "supported_io_types": { 00:10:41.753 "read": true, 00:10:41.753 "write": true, 00:10:41.753 "unmap": true, 00:10:41.753 "flush": false, 00:10:41.753 "reset": true, 00:10:41.753 "nvme_admin": false, 00:10:41.753 "nvme_io": false, 00:10:41.753 "nvme_io_md": false, 00:10:41.753 "write_zeroes": true, 00:10:41.753 "zcopy": false, 00:10:41.753 "get_zone_info": false, 00:10:41.753 "zone_management": false, 00:10:41.753 "zone_append": 
false, 00:10:41.753 "compare": false, 00:10:41.753 "compare_and_write": false, 00:10:41.753 "abort": false, 00:10:41.753 "seek_hole": true, 00:10:41.753 "seek_data": true, 00:10:41.753 "copy": false, 00:10:41.753 "nvme_iov_md": false 00:10:41.753 }, 00:10:41.753 "driver_specific": { 00:10:41.753 "lvol": { 00:10:41.753 "lvol_store_uuid": "6d4d962a-f74c-4eba-89f5-c96e0e91797f", 00:10:41.753 "base_bdev": "aio_bdev", 00:10:41.753 "thin_provision": false, 00:10:41.753 "num_allocated_clusters": 38, 00:10:41.754 "snapshot": false, 00:10:41.754 "clone": false, 00:10:41.754 "esnap_clone": false 00:10:41.754 } 00:10:41.754 } 00:10:41.754 } 00:10:41.754 ] 00:10:41.754 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:41.754 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:41.754 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:42.011 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:42.011 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:42.011 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:42.269 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:42.269 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:42.526 [2024-12-09 10:21:14.846863] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:42.526 10:21:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:42.526 10:21:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:42.790 request: 00:10:42.790 { 00:10:42.790 "uuid": "6d4d962a-f74c-4eba-89f5-c96e0e91797f", 00:10:42.790 "method": "bdev_lvol_get_lvstores", 00:10:42.790 "req_id": 1 00:10:42.790 } 00:10:42.790 Got JSON-RPC error response 00:10:42.790 response: 00:10:42.790 { 00:10:42.790 "code": -19, 00:10:42.790 "message": "No such device" 00:10:42.790 } 00:10:42.790 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:42.790 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:42.790 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:42.790 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:42.790 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:43.049 aio_bdev 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 617b348a-d012-46f9-a04e-479be6839ada 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=617b348a-d012-46f9-a04e-479be6839ada 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.049 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:43.306 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 617b348a-d012-46f9-a04e-479be6839ada -t 2000 00:10:43.563 [ 00:10:43.563 { 00:10:43.563 "name": "617b348a-d012-46f9-a04e-479be6839ada", 00:10:43.563 "aliases": [ 00:10:43.563 "lvs/lvol" 00:10:43.563 ], 00:10:43.563 "product_name": "Logical Volume", 00:10:43.563 "block_size": 4096, 00:10:43.563 "num_blocks": 38912, 00:10:43.563 "uuid": "617b348a-d012-46f9-a04e-479be6839ada", 00:10:43.563 "assigned_rate_limits": { 00:10:43.563 "rw_ios_per_sec": 0, 00:10:43.563 "rw_mbytes_per_sec": 0, 00:10:43.563 "r_mbytes_per_sec": 0, 00:10:43.563 "w_mbytes_per_sec": 0 00:10:43.563 }, 00:10:43.563 "claimed": false, 00:10:43.563 "zoned": false, 00:10:43.563 "supported_io_types": { 00:10:43.563 "read": true, 00:10:43.563 "write": true, 00:10:43.563 "unmap": true, 00:10:43.563 "flush": false, 00:10:43.563 "reset": true, 00:10:43.563 "nvme_admin": false, 00:10:43.563 "nvme_io": false, 00:10:43.563 "nvme_io_md": false, 00:10:43.563 "write_zeroes": true, 00:10:43.563 "zcopy": false, 00:10:43.563 "get_zone_info": false, 00:10:43.563 "zone_management": false, 00:10:43.563 "zone_append": false, 00:10:43.563 "compare": false, 00:10:43.563 "compare_and_write": false, 
00:10:43.563 "abort": false, 00:10:43.563 "seek_hole": true, 00:10:43.563 "seek_data": true, 00:10:43.563 "copy": false, 00:10:43.563 "nvme_iov_md": false 00:10:43.563 }, 00:10:43.563 "driver_specific": { 00:10:43.563 "lvol": { 00:10:43.563 "lvol_store_uuid": "6d4d962a-f74c-4eba-89f5-c96e0e91797f", 00:10:43.563 "base_bdev": "aio_bdev", 00:10:43.563 "thin_provision": false, 00:10:43.563 "num_allocated_clusters": 38, 00:10:43.563 "snapshot": false, 00:10:43.563 "clone": false, 00:10:43.563 "esnap_clone": false 00:10:43.563 } 00:10:43.563 } 00:10:43.563 } 00:10:43.563 ] 00:10:43.563 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:43.563 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:43.563 10:21:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:43.821 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:43.821 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:43.821 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:44.079 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:44.079 10:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 617b348a-d012-46f9-a04e-479be6839ada 00:10:44.337 10:21:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d4d962a-f74c-4eba-89f5-c96e0e91797f 00:10:44.904 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:44.904 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:44.904 00:10:44.904 real 0m19.408s 00:10:44.904 user 0m49.144s 00:10:44.904 sys 0m4.755s 00:10:44.904 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.904 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:44.904 ************************************ 00:10:44.904 END TEST lvs_grow_dirty 00:10:44.904 ************************************ 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:45.162 nvmf_trace.0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.162 rmmod nvme_tcp 00:10:45.162 rmmod nvme_fabrics 00:10:45.162 rmmod nvme_keyring 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2453119 ']' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2453119 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2453119 ']' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2453119 
00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2453119 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2453119' 00:10:45.162 killing process with pid 2453119 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2453119 00:10:45.162 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2453119 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.420 10:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:47.968 00:10:47.968 real 0m42.794s 00:10:47.968 user 1m11.676s 00:10:47.968 sys 0m9.000s 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:47.968 ************************************ 00:10:47.968 END TEST nvmf_lvs_grow 00:10:47.968 ************************************ 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:47.968 ************************************ 00:10:47.968 START TEST nvmf_bdev_io_wait 00:10:47.968 ************************************ 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:47.968 * Looking for test storage... 
00:10:47.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:47.968 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.968 --rc genhtml_branch_coverage=1 00:10:47.968 --rc genhtml_function_coverage=1 00:10:47.968 --rc genhtml_legend=1 00:10:47.968 --rc geninfo_all_blocks=1 00:10:47.968 --rc geninfo_unexecuted_blocks=1 00:10:47.968 00:10:47.968 ' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.968 --rc genhtml_branch_coverage=1 00:10:47.968 --rc genhtml_function_coverage=1 00:10:47.968 --rc genhtml_legend=1 00:10:47.968 --rc geninfo_all_blocks=1 00:10:47.968 --rc geninfo_unexecuted_blocks=1 00:10:47.968 00:10:47.968 ' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.968 --rc genhtml_branch_coverage=1 00:10:47.968 --rc genhtml_function_coverage=1 00:10:47.968 --rc genhtml_legend=1 00:10:47.968 --rc geninfo_all_blocks=1 00:10:47.968 --rc geninfo_unexecuted_blocks=1 00:10:47.968 00:10:47.968 ' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:47.968 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.968 --rc genhtml_branch_coverage=1 00:10:47.968 --rc genhtml_function_coverage=1 00:10:47.968 --rc genhtml_legend=1 00:10:47.968 --rc geninfo_all_blocks=1 00:10:47.968 --rc geninfo_unexecuted_blocks=1 00:10:47.968 00:10:47.968 ' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.968 10:21:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:47.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.968 10:21:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:47.968 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:47.968 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:47.968 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:47.968 10:21:20 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:49.875 10:21:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:49.875 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:49.875 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.875 10:21:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.875 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:49.876 Found net devices under 0000:09:00.0: cvl_0_0 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.876 
10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:49.876 Found net devices under 0000:09:00.1: cvl_0_1 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.876 10:21:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:49.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:10:49.876 00:10:49.876 --- 10.0.0.2 ping statistics --- 00:10:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.876 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:10:49.876 00:10:49.876 --- 10.0.0.1 ping statistics --- 00:10:49.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.876 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2455757 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2455757 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2455757 ']' 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:49.876 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.134 [2024-12-09 10:21:22.337346] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:10:50.134 [2024-12-09 10:21:22.337430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.134 [2024-12-09 10:21:22.408278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.134 [2024-12-09 10:21:22.471508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.134 [2024-12-09 10:21:22.471557] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:50.134 [2024-12-09 10:21:22.471574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.134 [2024-12-09 10:21:22.471586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.134 [2024-12-09 10:21:22.471596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.134 [2024-12-09 10:21:22.473262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.134 [2024-12-09 10:21:22.473314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.134 [2024-12-09 10:21:22.476162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.134 [2024-12-09 10:21:22.476167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.134 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:50.134 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:50.134 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:50.134 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:50.134 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 10:21:22 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 [2024-12-09 10:21:22.687958] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 Malloc0 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 
10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:50.393 [2024-12-09 10:21:22.741708] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2455804 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2455806 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 
00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.393 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.394 { 00:10:50.394 "params": { 00:10:50.394 "name": "Nvme$subsystem", 00:10:50.394 "trtype": "$TEST_TRANSPORT", 00:10:50.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.394 "adrfam": "ipv4", 00:10:50.394 "trsvcid": "$NVMF_PORT", 00:10:50.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.394 "hdgst": ${hdgst:-false}, 00:10:50.394 "ddgst": ${ddgst:-false} 00:10:50.394 }, 00:10:50.394 "method": "bdev_nvme_attach_controller" 00:10:50.394 } 00:10:50.394 EOF 00:10:50.394 )") 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2455808 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.394 { 00:10:50.394 "params": { 00:10:50.394 
"name": "Nvme$subsystem", 00:10:50.394 "trtype": "$TEST_TRANSPORT", 00:10:50.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.394 "adrfam": "ipv4", 00:10:50.394 "trsvcid": "$NVMF_PORT", 00:10:50.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.394 "hdgst": ${hdgst:-false}, 00:10:50.394 "ddgst": ${ddgst:-false} 00:10:50.394 }, 00:10:50.394 "method": "bdev_nvme_attach_controller" 00:10:50.394 } 00:10:50.394 EOF 00:10:50.394 )") 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2455810 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:10:50.394 { 00:10:50.394 "params": { 00:10:50.394 "name": "Nvme$subsystem", 00:10:50.394 "trtype": "$TEST_TRANSPORT", 00:10:50.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.394 "adrfam": "ipv4", 00:10:50.394 "trsvcid": "$NVMF_PORT", 00:10:50.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.394 "hdgst": ${hdgst:-false}, 00:10:50.394 "ddgst": ${ddgst:-false} 00:10:50.394 }, 00:10:50.394 "method": "bdev_nvme_attach_controller" 00:10:50.394 } 00:10:50.394 EOF 00:10:50.394 )") 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.394 { 00:10:50.394 "params": { 00:10:50.394 "name": "Nvme$subsystem", 00:10:50.394 "trtype": "$TEST_TRANSPORT", 00:10:50.394 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.394 "adrfam": "ipv4", 00:10:50.394 "trsvcid": "$NVMF_PORT", 00:10:50.394 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.394 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.394 "hdgst": ${hdgst:-false}, 00:10:50.394 "ddgst": ${ddgst:-false} 00:10:50.394 }, 00:10:50.394 "method": "bdev_nvme_attach_controller" 00:10:50.394 } 00:10:50.394 EOF 00:10:50.394 )") 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2455804 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:50.394 
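The xtrace above expands the same heredoc once per bdevperf instance. For readability, here is a hypothetical condensed reconstruction of the `gen_nvmf_target_json` helper being traced (function body and defaults are inferred from the trace output, not copied from the actual nvmf/common.sh source):

```shell
#!/usr/bin/env bash
# Sketch of the traced gen_nvmf_target_json: emit one bdev_nvme_attach_controller
# JSON entry per requested subsystem, then join the entries with IFS=, as the
# trace does before piping them through `jq .`.
gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join all subsystem entries with commas, matching the traced IFS=, printf.
    local IFS=,
    printf '%s\n' "${config[*]}"
}
```

In the trace this output is handed to each bdevperf via `--json /dev/fd/63`, i.e. a process substitution rather than a file on disk.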
10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.394 "params": { 00:10:50.394 "name": "Nvme1", 00:10:50.394 "trtype": "tcp", 00:10:50.394 "traddr": "10.0.0.2", 00:10:50.394 "adrfam": "ipv4", 00:10:50.394 "trsvcid": "4420", 00:10:50.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.394 "hdgst": false, 00:10:50.394 "ddgst": false 00:10:50.394 }, 00:10:50.394 "method": "bdev_nvme_attach_controller" 00:10:50.394 }' 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.394 "params": { 00:10:50.394 "name": "Nvme1", 00:10:50.394 "trtype": "tcp", 00:10:50.394 "traddr": "10.0.0.2", 00:10:50.394 "adrfam": "ipv4", 00:10:50.394 "trsvcid": "4420", 00:10:50.394 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.394 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.394 "hdgst": false, 00:10:50.394 "ddgst": false 00:10:50.394 }, 00:10:50.394 "method": "bdev_nvme_attach_controller" 00:10:50.394 }' 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:50.394 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.395 "params": { 00:10:50.395 "name": "Nvme1", 00:10:50.395 "trtype": "tcp", 00:10:50.395 "traddr": "10.0.0.2", 00:10:50.395 "adrfam": "ipv4", 00:10:50.395 "trsvcid": "4420", 00:10:50.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.395 "hdgst": false, 00:10:50.395 "ddgst": false 00:10:50.395 }, 00:10:50.395 "method": "bdev_nvme_attach_controller" 00:10:50.395 }' 00:10:50.395 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:50.395 10:21:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.395 "params": { 00:10:50.395 "name": "Nvme1", 00:10:50.395 "trtype": "tcp", 00:10:50.395 "traddr": "10.0.0.2", 00:10:50.395 "adrfam": "ipv4", 00:10:50.395 "trsvcid": "4420", 00:10:50.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.395 "hdgst": false, 00:10:50.395 "ddgst": false 00:10:50.395 }, 00:10:50.395 "method": "bdev_nvme_attach_controller" 00:10:50.395 }' 00:10:50.395 [2024-12-09 10:21:22.792447] Starting SPDK v25.01-pre git sha1 
6c714c5fe / DPDK 24.03.0 initialization... 00:10:50.395 [2024-12-09 10:21:22.792447] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:10:50.395 [2024-12-09 10:21:22.792446] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:10:50.395 [2024-12-09 10:21:22.792548] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:50.395 [2024-12-09 10:21:22.792549] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:50.395 [2024-12-09 10:21:22.792548] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:50.395 [2024-12-09 10:21:22.793665] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:10:50.395 [2024-12-09 10:21:22.793734] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:50.652 [2024-12-09 10:21:22.976289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.652 [2024-12-09 10:21:23.032940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:50.652 [2024-12-09 10:21:23.081656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.909 [2024-12-09 10:21:23.138495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:50.909 [2024-12-09 10:21:23.190615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.909 [2024-12-09 10:21:23.248428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:50.909 [2024-12-09 10:21:23.266265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.909 [2024-12-09 10:21:23.316267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:51.166 Running I/O for 1 seconds... 00:10:51.166 Running I/O for 1 seconds... 00:10:51.166 Running I/O for 1 seconds... 00:10:51.166 Running I/O for 1 seconds... 
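The four `Running I/O for 1 seconds...` lines above come from four bdevperf processes launched in parallel, one workload per core mask, which the script later reaps with `wait` on their PIDs (2455804/2455806/2455808/2455810). A hypothetical standalone sketch of that fan-out, with the binary path and JSON config taken as parameters and the remaining flags copied from the trace:

```shell
#!/usr/bin/env bash
# Launch one 1-second bdevperf per I/O type, each pinned to its own core mask
# and shm instance (-i), then reap all of them -- mirroring the traced fan-out.
start_io_workers() {
    local bdevperf=$1 config=$2   # e.g. build/examples/bdevperf and a JSON path
    local -a masks=(0x10 0x20 0x40 0x80)
    local -a workloads=(write read flush unmap)
    local -a pids=()
    local i
    for i in 0 1 2 3; do
        "$bdevperf" -m "${masks[i]}" -i "$((i + 1))" --json "$config" \
            -q 128 -o 4096 -w "${workloads[i]}" -t 1 -s 256 &
        pids+=($!)
    done
    sync                 # the trace issues a sync before waiting
    wait "${pids[@]}"    # equivalent to the wait 24558xx calls in the log
}
```

Running each workload on a distinct core mask with a distinct `--file-prefix`/instance lets the four DPDK processes coexist without fighting over hugepage shm files, which is why the EAL banners above show spdk1 through spdk4.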
00:10:52.100 188160.00 IOPS, 735.00 MiB/s 00:10:52.100 Latency(us) 00:10:52.100 [2024-12-09T09:21:24.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.100 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:52.100 Nvme1n1 : 1.00 187806.10 733.62 0.00 0.00 677.90 301.89 1868.99 00:10:52.100 [2024-12-09T09:21:24.541Z] =================================================================================================================== 00:10:52.100 [2024-12-09T09:21:24.541Z] Total : 187806.10 733.62 0.00 0.00 677.90 301.89 1868.99 00:10:52.100 6802.00 IOPS, 26.57 MiB/s 00:10:52.100 Latency(us) 00:10:52.100 [2024-12-09T09:21:24.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.100 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:52.100 Nvme1n1 : 1.02 6779.88 26.48 0.00 0.00 18670.71 6844.87 27962.03 00:10:52.100 [2024-12-09T09:21:24.541Z] =================================================================================================================== 00:10:52.100 [2024-12-09T09:21:24.541Z] Total : 6779.88 26.48 0.00 0.00 18670.71 6844.87 27962.03 00:10:52.100 8516.00 IOPS, 33.27 MiB/s 00:10:52.100 Latency(us) 00:10:52.100 [2024-12-09T09:21:24.541Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.100 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:52.100 Nvme1n1 : 1.01 8556.87 33.43 0.00 0.00 14873.41 8932.31 26408.58 00:10:52.100 [2024-12-09T09:21:24.541Z] =================================================================================================================== 00:10:52.100 [2024-12-09T09:21:24.541Z] Total : 8556.87 33.43 0.00 0.00 14873.41 8932.31 26408.58 00:10:52.358 6359.00 IOPS, 24.84 MiB/s 00:10:52.358 Latency(us) 00:10:52.358 [2024-12-09T09:21:24.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:52.358 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:10:52.358 Nvme1n1 : 1.01 6444.01 25.17 0.00 0.00 19794.07 5437.06 40583.77 00:10:52.358 [2024-12-09T09:21:24.799Z] =================================================================================================================== 00:10:52.358 [2024-12-09T09:21:24.799Z] Total : 6444.01 25.17 0.00 0.00 19794.07 5437.06 40583.77 00:10:52.358 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2455806 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2455808 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2455810 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:52.616 rmmod nvme_tcp 00:10:52.616 rmmod nvme_fabrics 00:10:52.616 rmmod nvme_keyring 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2455757 ']' 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2455757 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2455757 ']' 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2455757 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2455757 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2455757' 00:10:52.616 killing process with pid 2455757 00:10:52.616 10:21:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2455757 00:10:52.616 10:21:24 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2455757 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.885 10:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.798 00:10:54.798 real 0m7.348s 00:10:54.798 user 0m16.440s 00:10:54.798 sys 0m3.543s 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:54.798 ************************************ 
00:10:54.798 END TEST nvmf_bdev_io_wait 00:10:54.798 ************************************ 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.798 ************************************ 00:10:54.798 START TEST nvmf_queue_depth 00:10:54.798 ************************************ 00:10:54.798 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:55.057 * Looking for test storage... 00:10:55.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:55.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.057 --rc genhtml_branch_coverage=1 00:10:55.057 --rc genhtml_function_coverage=1 00:10:55.057 --rc genhtml_legend=1 00:10:55.057 --rc geninfo_all_blocks=1 00:10:55.057 --rc 
geninfo_unexecuted_blocks=1 00:10:55.057 00:10:55.057 ' 00:10:55.057 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:55.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.057 --rc genhtml_branch_coverage=1 00:10:55.057 --rc genhtml_function_coverage=1 00:10:55.058 --rc genhtml_legend=1 00:10:55.058 --rc geninfo_all_blocks=1 00:10:55.058 --rc geninfo_unexecuted_blocks=1 00:10:55.058 00:10:55.058 ' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:55.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.058 --rc genhtml_branch_coverage=1 00:10:55.058 --rc genhtml_function_coverage=1 00:10:55.058 --rc genhtml_legend=1 00:10:55.058 --rc geninfo_all_blocks=1 00:10:55.058 --rc geninfo_unexecuted_blocks=1 00:10:55.058 00:10:55.058 ' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:55.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.058 --rc genhtml_branch_coverage=1 00:10:55.058 --rc genhtml_function_coverage=1 00:10:55.058 --rc genhtml_legend=1 00:10:55.058 --rc geninfo_all_blocks=1 00:10:55.058 --rc geninfo_unexecuted_blocks=1 00:10:55.058 00:10:55.058 ' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:55.058 10:21:27 
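The `lt 1.15 2` trace above walks a generic dotted-version comparison (`IFS=.-:`, `read -ra ver1`, a component loop, `return 0`). A hypothetical condensed reconstruction of those `cmp_versions`/`lt` helpers follows; the real scripts/common.sh handles more operators and edge cases, and this sketch covers only the numeric comparison visible in the trace:

```shell
#!/usr/bin/env bash
# Compare dotted version strings component by component, treating missing
# trailing components as 0, as in the traced cmp_versions helper.
cmp_versions() {
    local IFS=.-:          # split on the same separators as the trace
    local -a ver1 ver2
    read -ra ver1 <<<"$1"
    local op=$2
    read -ra ver2 <<<"$3"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if ((d1 > d2)); then
            [[ $op == *'>'* ]]; return   # first difference decides the result
        elif ((d1 < d2)); then
            [[ $op == *'<'* ]]; return
        fi
    done
    [[ $op == *'='* ]]     # all components equal: only <=, >=, == succeed
}
lt() { cmp_versions "$1" '<' "$2"; }
```

So `lt 1.15 2` succeeds (1 < 2 at the first component), which is why the trace above takes the `return 0` path and the lcov branch-coverage options get enabled.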
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:55.058 10:21:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:55.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:55.058 10:21:27 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:55.058 10:21:27 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.590 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:57.590 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:57.590 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:57.590 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:57.590 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:57.591 10:21:29 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:57.591 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:57.591 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:57.591 Found net devices under 0000:09:00.0: cvl_0_0 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:57.591 Found net devices under 0000:09:00.1: cvl_0_1 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:57.591 
10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:57.591 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:57.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:57.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:10:57.592 00:10:57.592 --- 10.0.0.2 ping statistics --- 00:10:57.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.592 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:57.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:57.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:10:57.592 00:10:57.592 --- 10.0.0.1 ping statistics --- 00:10:57.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:57.592 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2458039 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2458039 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2458039 ']' 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.592 10:21:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.592 [2024-12-09 10:21:29.814761] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:10:57.592 [2024-12-09 10:21:29.814845] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:57.592 [2024-12-09 10:21:29.892873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.592 [2024-12-09 10:21:29.949304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:57.592 [2024-12-09 10:21:29.949361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:57.592 [2024-12-09 10:21:29.949375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:57.592 [2024-12-09 10:21:29.949386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:57.592 [2024-12-09 10:21:29.949395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:57.592 [2024-12-09 10:21:29.949989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 [2024-12-09 10:21:30.104692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 Malloc0 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 [2024-12-09 10:21:30.154652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.851 10:21:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2458064 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2458064 /var/tmp/bdevperf.sock 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2458064 ']' 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:57.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.851 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:57.851 [2024-12-09 10:21:30.204082] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:10:57.851 [2024-12-09 10:21:30.204170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2458064 ] 00:10:57.851 [2024-12-09 10:21:30.274039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:58.110 [2024-12-09 10:21:30.335577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:58.110 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.110 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:58.110 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:58.110 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.110 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:58.367 NVMe0n1 00:10:58.367 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.367 10:21:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:58.367 Running I/O for 10 seconds... 
00:11:00.674 7911.00 IOPS, 30.90 MiB/s [2024-12-09T09:21:34.050Z] 8167.50 IOPS, 31.90 MiB/s [2024-12-09T09:21:34.984Z] 8194.33 IOPS, 32.01 MiB/s [2024-12-09T09:21:35.928Z] 8194.75 IOPS, 32.01 MiB/s [2024-12-09T09:21:36.863Z] 8195.80 IOPS, 32.01 MiB/s [2024-12-09T09:21:37.798Z] 8220.00 IOPS, 32.11 MiB/s [2024-12-09T09:21:39.170Z] 8270.86 IOPS, 32.31 MiB/s [2024-12-09T09:21:40.104Z] 8309.88 IOPS, 32.46 MiB/s [2024-12-09T09:21:41.036Z] 8305.78 IOPS, 32.44 MiB/s [2024-12-09T09:21:41.036Z] 8295.90 IOPS, 32.41 MiB/s 00:11:08.596 Latency(us) 00:11:08.596 [2024-12-09T09:21:41.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.596 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:08.596 Verification LBA range: start 0x0 length 0x4000 00:11:08.596 NVMe0n1 : 10.07 8336.26 32.56 0.00 0.00 122350.00 18058.81 74565.40 00:11:08.596 [2024-12-09T09:21:41.037Z] =================================================================================================================== 00:11:08.596 [2024-12-09T09:21:41.037Z] Total : 8336.26 32.56 0.00 0.00 122350.00 18058.81 74565.40 00:11:08.596 { 00:11:08.596 "results": [ 00:11:08.596 { 00:11:08.596 "job": "NVMe0n1", 00:11:08.596 "core_mask": "0x1", 00:11:08.596 "workload": "verify", 00:11:08.596 "status": "finished", 00:11:08.596 "verify_range": { 00:11:08.596 "start": 0, 00:11:08.596 "length": 16384 00:11:08.596 }, 00:11:08.596 "queue_depth": 1024, 00:11:08.596 "io_size": 4096, 00:11:08.596 "runtime": 10.074425, 00:11:08.596 "iops": 8336.25740426873, 00:11:08.596 "mibps": 32.56350548542473, 00:11:08.596 "io_failed": 0, 00:11:08.596 "io_timeout": 0, 00:11:08.596 "avg_latency_us": 122350.00138147888, 00:11:08.596 "min_latency_us": 18058.80888888889, 00:11:08.596 "max_latency_us": 74565.40444444444 00:11:08.596 } 00:11:08.596 ], 00:11:08.596 "core_count": 1 00:11:08.596 } 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2458064 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2458064 ']' 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2458064 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458064 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458064' 00:11:08.596 killing process with pid 2458064 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2458064 00:11:08.596 Received shutdown signal, test time was about 10.000000 seconds 00:11:08.596 00:11:08.596 Latency(us) 00:11:08.596 [2024-12-09T09:21:41.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:08.596 [2024-12-09T09:21:41.037Z] =================================================================================================================== 00:11:08.596 [2024-12-09T09:21:41.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:08.596 10:21:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2458064 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.853 rmmod nvme_tcp 00:11:08.853 rmmod nvme_fabrics 00:11:08.853 rmmod nvme_keyring 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2458039 ']' 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2458039 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2458039 ']' 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2458039 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458039 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458039' 00:11:08.853 killing process with pid 2458039 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2458039 00:11:08.853 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2458039 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.110 10:21:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.656 10:21:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:11.656 00:11:11.656 real 0m16.354s 00:11:11.656 user 0m21.903s 00:11:11.656 sys 0m3.653s 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:11.656 ************************************ 00:11:11.656 END TEST nvmf_queue_depth 00:11:11.656 ************************************ 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:11.656 ************************************ 00:11:11.656 START TEST nvmf_target_multipath 00:11:11.656 ************************************ 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:11.656 * Looking for test storage... 
00:11:11.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:11.656 10:21:43 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:11.656 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:11.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.657 --rc genhtml_branch_coverage=1 00:11:11.657 --rc genhtml_function_coverage=1 00:11:11.657 --rc genhtml_legend=1 00:11:11.657 --rc geninfo_all_blocks=1 00:11:11.657 --rc geninfo_unexecuted_blocks=1 00:11:11.657 00:11:11.657 ' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:11.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.657 --rc genhtml_branch_coverage=1 00:11:11.657 --rc genhtml_function_coverage=1 00:11:11.657 --rc genhtml_legend=1 00:11:11.657 --rc geninfo_all_blocks=1 00:11:11.657 --rc geninfo_unexecuted_blocks=1 00:11:11.657 00:11:11.657 ' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:11.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.657 --rc genhtml_branch_coverage=1 00:11:11.657 --rc genhtml_function_coverage=1 00:11:11.657 --rc genhtml_legend=1 00:11:11.657 --rc geninfo_all_blocks=1 00:11:11.657 --rc geninfo_unexecuted_blocks=1 00:11:11.657 00:11:11.657 ' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:11.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:11.657 --rc genhtml_branch_coverage=1 00:11:11.657 --rc genhtml_function_coverage=1 00:11:11.657 --rc genhtml_legend=1 00:11:11.657 --rc geninfo_all_blocks=1 00:11:11.657 --rc geninfo_unexecuted_blocks=1 00:11:11.657 00:11:11.657 ' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:11.657 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.657 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:11.658 10:21:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:14.214 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:14.214 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:14.214 Found net devices under 0000:09:00.0: cvl_0_0 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:14.214 10:21:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.214 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:14.214 Found net devices under 0000:09:00.1: cvl_0_1 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:14.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:11:14.215 00:11:14.215 --- 10.0.0.2 ping statistics --- 00:11:14.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.215 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:11:14.215 00:11:14.215 --- 10.0.0.1 ping statistics --- 00:11:14.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.215 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:14.215 only one NIC for nvmf test 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:14.215 10:21:46 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:14.215 rmmod nvme_tcp 00:11:14.215 rmmod nvme_fabrics 00:11:14.215 rmmod nvme_keyring 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.215 10:21:46 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:16.157 00:11:16.157 real 0m4.664s 00:11:16.157 user 0m0.944s 00:11:16.157 sys 0m1.728s 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:16.157 ************************************ 00:11:16.157 END TEST nvmf_target_multipath 00:11:16.157 ************************************ 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:16.157 ************************************ 00:11:16.157 START TEST nvmf_zcopy 00:11:16.157 ************************************ 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:16.157 * Looking for test storage... 00:11:16.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:16.157 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:16.158 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.158 --rc genhtml_branch_coverage=1 00:11:16.158 --rc genhtml_function_coverage=1 00:11:16.158 --rc genhtml_legend=1 00:11:16.158 --rc geninfo_all_blocks=1 00:11:16.158 --rc geninfo_unexecuted_blocks=1 00:11:16.158 00:11:16.158 ' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.158 --rc genhtml_branch_coverage=1 00:11:16.158 --rc genhtml_function_coverage=1 00:11:16.158 --rc genhtml_legend=1 00:11:16.158 --rc geninfo_all_blocks=1 00:11:16.158 --rc geninfo_unexecuted_blocks=1 00:11:16.158 00:11:16.158 ' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.158 --rc genhtml_branch_coverage=1 00:11:16.158 --rc genhtml_function_coverage=1 00:11:16.158 --rc genhtml_legend=1 00:11:16.158 --rc geninfo_all_blocks=1 00:11:16.158 --rc geninfo_unexecuted_blocks=1 00:11:16.158 00:11:16.158 ' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:16.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:16.158 --rc genhtml_branch_coverage=1 00:11:16.158 --rc 
genhtml_function_coverage=1 00:11:16.158 --rc genhtml_legend=1 00:11:16.158 --rc geninfo_all_blocks=1 00:11:16.158 --rc geninfo_unexecuted_blocks=1 00:11:16.158 00:11:16.158 ' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.158 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:16.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:16.158 10:21:48 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:16.158 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:16.159 10:21:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:18.689 10:21:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:18.689 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:18.689 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:18.689 Found net devices under 0000:09:00.0: cvl_0_0 00:11:18.689 10:21:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:18.689 Found net devices under 0000:09:00.1: cvl_0_1 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.689 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.690 10:21:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:18.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:11:18.690 00:11:18.690 --- 10.0.0.2 ping statistics --- 00:11:18.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.690 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:11:18.690 00:11:18.690 --- 10.0.0.1 ping statistics --- 00:11:18.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.690 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2463391 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2463391 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2463391 ']' 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:18.690 10:21:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.690 [2024-12-09 10:21:50.921275] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:11:18.690 [2024-12-09 10:21:50.921359] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.690 [2024-12-09 10:21:50.998088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.690 [2024-12-09 10:21:51.057759] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.690 [2024-12-09 10:21:51.057805] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:18.690 [2024-12-09 10:21:51.057819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.690 [2024-12-09 10:21:51.057829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.690 [2024-12-09 10:21:51.057839] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.690 [2024-12-09 10:21:51.058521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 [2024-12-09 10:21:51.209966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 [2024-12-09 10:21:51.226240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 malloc0 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:18.949 { 00:11:18.949 "params": { 00:11:18.949 "name": "Nvme$subsystem", 00:11:18.949 "trtype": "$TEST_TRANSPORT", 00:11:18.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:18.949 "adrfam": "ipv4", 00:11:18.949 "trsvcid": "$NVMF_PORT", 00:11:18.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:18.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:18.949 "hdgst": ${hdgst:-false}, 00:11:18.949 "ddgst": ${ddgst:-false} 00:11:18.949 }, 00:11:18.949 "method": "bdev_nvme_attach_controller" 00:11:18.949 } 00:11:18.949 EOF 00:11:18.949 )") 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:18.949 10:21:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:18.949 "params": { 00:11:18.949 "name": "Nvme1", 00:11:18.949 "trtype": "tcp", 00:11:18.949 "traddr": "10.0.0.2", 00:11:18.949 "adrfam": "ipv4", 00:11:18.949 "trsvcid": "4420", 00:11:18.949 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:18.949 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:18.949 "hdgst": false, 00:11:18.949 "ddgst": false 00:11:18.949 }, 00:11:18.949 "method": "bdev_nvme_attach_controller" 00:11:18.949 }' 00:11:18.949 [2024-12-09 10:21:51.313517] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:11:18.950 [2024-12-09 10:21:51.313605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2463423 ] 00:11:18.950 [2024-12-09 10:21:51.384914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.208 [2024-12-09 10:21:51.445217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.483 Running I/O for 10 seconds... 
00:11:21.792 5737.00 IOPS, 44.82 MiB/s [2024-12-09T09:21:55.167Z] 5801.50 IOPS, 45.32 MiB/s [2024-12-09T09:21:56.108Z] 5821.67 IOPS, 45.48 MiB/s [2024-12-09T09:21:57.039Z] 5832.00 IOPS, 45.56 MiB/s [2024-12-09T09:21:57.971Z] 5850.20 IOPS, 45.70 MiB/s [2024-12-09T09:21:58.902Z] 5853.00 IOPS, 45.73 MiB/s [2024-12-09T09:21:59.835Z] 5854.43 IOPS, 45.74 MiB/s [2024-12-09T09:22:01.208Z] 5856.50 IOPS, 45.75 MiB/s [2024-12-09T09:22:02.141Z] 5862.67 IOPS, 45.80 MiB/s [2024-12-09T09:22:02.141Z] 5864.20 IOPS, 45.81 MiB/s 00:11:29.700 Latency(us) 00:11:29.700 [2024-12-09T09:22:02.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:29.700 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:29.700 Verification LBA range: start 0x0 length 0x1000 00:11:29.700 Nvme1n1 : 10.02 5865.62 45.83 0.00 0.00 21762.55 3640.89 32234.00 00:11:29.700 [2024-12-09T09:22:02.141Z] =================================================================================================================== 00:11:29.700 [2024-12-09T09:22:02.141Z] Total : 5865.62 45.83 0.00 0.00 21762.55 3640.89 32234.00 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2464638 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:11:29.700 10:22:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:29.700 { 00:11:29.700 "params": { 00:11:29.700 "name": "Nvme$subsystem", 00:11:29.700 "trtype": "$TEST_TRANSPORT", 00:11:29.700 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:29.700 "adrfam": "ipv4", 00:11:29.700 "trsvcid": "$NVMF_PORT", 00:11:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:29.700 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:29.700 "hdgst": ${hdgst:-false}, 00:11:29.700 "ddgst": ${ddgst:-false} 00:11:29.700 }, 00:11:29.700 "method": "bdev_nvme_attach_controller" 00:11:29.700 } 00:11:29.700 EOF 00:11:29.700 )") 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:11:29.700 [2024-12-09 10:22:02.116187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.700 [2024-12-09 10:22:02.116232] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:11:29.700 10:22:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:29.700 "params": { 00:11:29.700 "name": "Nvme1", 00:11:29.700 "trtype": "tcp", 00:11:29.700 "traddr": "10.0.0.2", 00:11:29.700 "adrfam": "ipv4", 00:11:29.700 "trsvcid": "4420", 00:11:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:29.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:29.700 "hdgst": false, 00:11:29.700 "ddgst": false 00:11:29.700 }, 00:11:29.700 "method": "bdev_nvme_attach_controller" 00:11:29.700 }' 00:11:29.700 [2024-12-09 10:22:02.124115] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.700 [2024-12-09 10:22:02.124162] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.700 [2024-12-09 10:22:02.132163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.700 [2024-12-09 10:22:02.132196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.700 [2024-12-09 10:22:02.140172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.700 [2024-12-09 10:22:02.140195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.148228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.148251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.156525] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:11:29.958 [2024-12-09 10:22:02.156597] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2464638 ] 00:11:29.958 [2024-12-09 10:22:02.160244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.160267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.168252] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.168274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.176273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.176295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.184294] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.184316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.192317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.192340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.200340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.200362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.208361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.208382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.216384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.216406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.224401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.224435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.225547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.958 [2024-12-09 10:22:02.232469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.232494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.240493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.240528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:11:29.958 [2024-12-09 10:22:02.248497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.248517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.256530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.256550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.264508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.264528] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.272533] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.272553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.280555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.280574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.283776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.958 [2024-12-09 10:22:02.288576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.288595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.296604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.296625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.304654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.304690] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.312671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.312706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.320693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.320727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.328737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.328776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.336735] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.336772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.344737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.344763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.352755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.958 [2024-12-09 10:22:02.352779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.958 [2024-12-09 10:22:02.360792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.959 [2024-12-09 10:22:02.360827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.959 [2024-12-09 10:22:02.368816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.959 [2024-12-09 10:22:02.368851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:29.959 [2024-12-09 10:22:02.376829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.959 [2024-12-09 10:22:02.376857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.959 [2024-12-09 10:22:02.384832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.959 [2024-12-09 10:22:02.384851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:29.959 [2024-12-09 10:22:02.392853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:29.959 [2024-12-09 10:22:02.392873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.400892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.400912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.408905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.408929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.416943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.416975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.424948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.424970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.432990] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.433028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.441013] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.441036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.449033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.449054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.457052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.457073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.465085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.465105] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.473098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.473118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.481120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.481161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.489167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.489190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.497170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.497192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.505206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.505227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.513227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.513248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.521247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.521268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.529274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.529296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.537293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.537315] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.545317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.545338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.553339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.553359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.561362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.561382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.569386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 
[2024-12-09 10:22:02.569411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.577392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.577412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.585435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.585460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.593460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.593497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 Running I/O for 5 seconds... 00:11:30.217 [2024-12-09 10:22:02.601474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.601493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.615802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.615831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.627102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.627130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.217 [2024-12-09 10:22:02.639772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.217 [2024-12-09 10:22:02.639801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.218 [2024-12-09 10:22:02.649945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.218 [2024-12-09 
10:22:02.649973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.660746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.660773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.673294] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.673323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.683791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.683817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.694840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.694867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.707693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.707720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.717804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.717831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.728299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.728326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.738931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.738959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.749599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.749626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.760248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.760275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.770871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.770913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.784451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.784479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.795070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.795098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.805311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.805338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.815558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.815585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.825948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.825976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 
[2024-12-09 10:22:02.836938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.836965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.847552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.847579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.858231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.858257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.871007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.871034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.881228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.881255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.891659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.891686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.901730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.901758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.476 [2024-12-09 10:22:02.912226] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:30.476 [2024-12-09 10:22:02.912253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:30.734 [2024-12-09 10:22:02.922603] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:30.734 [2024-12-09 10:22:02.922630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:30.734 [same subsystem.c:2130 / nvmf_rpc.c:1520 error pair repeated roughly every 10 ms, 10:22:02.933 through 10:22:03.592]
00:11:31.251 11902.00 IOPS, 92.98 MiB/s [2024-12-09T09:22:03.692Z]
00:11:31.251 [same error pair repeated, 10:22:03.603 through 10:22:04.597]
00:11:32.285 11989.50 IOPS, 93.67 MiB/s [2024-12-09T09:22:04.726Z]
00:11:32.285 [same error pair repeated, 10:22:04.607 through 10:22:04.762]
00:11:32.542 [2024-12-09 10:22:04.772894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:32.542 [2024-12-09 10:22:04.772922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:32.542 [2024-12-09 10:22:04.783360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.542 [2024-12-09 10:22:04.783387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.542 [2024-12-09 10:22:04.794277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.542 [2024-12-09 10:22:04.794304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.542 [2024-12-09 10:22:04.804904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.542 [2024-12-09 10:22:04.804942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.542 [2024-12-09 10:22:04.818075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.542 [2024-12-09 10:22:04.818102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.542 [2024-12-09 10:22:04.828307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.828334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.838649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.838676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.849106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.849133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.859553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.859580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.870468] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.870495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.881291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.881318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.891698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.891725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.902099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.902125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.912490] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.912517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.923066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.923093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.933925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.933952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.944660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.944687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.955382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.955409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.968149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.968175] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.543 [2024-12-09 10:22:04.978385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.543 [2024-12-09 10:22:04.978413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:04.989161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:04.989187] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.001880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.001907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.011968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.012005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.022747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.022774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.035250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.035276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.046582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 
[2024-12-09 10:22:05.046609] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.055403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.055431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.066962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.066989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.077375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.077402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.087788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.087815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.098400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.098427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.108846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.108873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.119257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.119284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.130026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.130054] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.141020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.141048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.153406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.153433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.165435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.165462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.175418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.175445] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.185901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.185928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.196277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.196303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.206926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.206953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.800 [2024-12-09 10:22:05.217786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.217823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:32.800 [2024-12-09 10:22:05.230944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.800 [2024-12-09 10:22:05.230971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:32.801 [2024-12-09 10:22:05.241381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:32.801 [2024-12-09 10:22:05.241408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.251713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.251740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.262039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.262066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.272480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.272507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.283036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.283064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.293291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.293317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.303739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.303766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.314368] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.314395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.324509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.324536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.334725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.334752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.345087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.345114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.355535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.355561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.366237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.366264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.377041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.377067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.389570] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.389596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.401452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.401479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.410253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.410280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.421601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.421637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.433840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.433868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.443566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.443593] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.453888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.453915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.464305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.464332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.476854] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.476881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.487367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 
[2024-12-09 10:22:05.487395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.058 [2024-12-09 10:22:05.498161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.058 [2024-12-09 10:22:05.498189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.510895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.510923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.522682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.522709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.532105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.532132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.543884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.543911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.556404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.556431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.566293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.566321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.576945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.576972] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.590611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.590639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.600770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.600798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 11991.67 IOPS, 93.68 MiB/s [2024-12-09T09:22:05.757Z] [2024-12-09 10:22:05.611101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.611129] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.621845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.621872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.632348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.632374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.645821] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.645848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.656198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.656225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.666801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.666828] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.679505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.679532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.689497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.689525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.700170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.700207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.710684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.710711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.721604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.721630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.732364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.732391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.742904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.742935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.316 [2024-12-09 10:22:05.753852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.316 [2024-12-09 10:22:05.753879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:33.574 [2024-12-09 10:22:05.765112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.765146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.775736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.775762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.786295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.786322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.796802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.796828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.807330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.807357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.819568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.819596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.828959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.828985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.841988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.842015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.852493] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.852520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.862600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.862627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.574 [2024-12-09 10:22:05.872807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.574 [2024-12-09 10:22:05.872834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.575 [2024-12-09 10:22:05.882918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.575 [2024-12-09 10:22:05.882944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.575 [2024-12-09 10:22:05.893165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.575 [2024-12-09 10:22:05.893192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.575 [2024-12-09 10:22:05.903670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.575 [2024-12-09 10:22:05.903697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.575 [2024-12-09 10:22:05.916318] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.575 [2024-12-09 10:22:05.916345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.575 [2024-12-09 10:22:05.925855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:33.575 [2024-12-09 10:22:05.925882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:33.575 [2024-12-09 10:22:05.936592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:11:33.575 [2024-12-09 10:22:05.936619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair "subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats with advancing timestamps from 10:22:05.947 to 10:22:06.599 ...]
00:11:34.349 11994.25 IOPS, 93.71 MiB/s [2024-12-09T09:22:06.790Z]
[... the same error pair repeats from 10:22:06.610 to 10:22:07.548 ...]
00:11:35.124 [2024-12-09 10:22:07.559104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.124 [2024-12-09 10:22:07.559130]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats from 10:22:07.569 to 10:22:07.600 ...]
00:11:35.381 11990.80 IOPS, 93.68 MiB/s [2024-12-09T09:22:07.822Z]
[... the same error pair repeats from 10:22:07.610 to 10:22:07.617 ...]
00:11:35.381
00:11:35.381 Latency(us)
00:11:35.381 [2024-12-09T09:22:07.822Z] Device Information : runtime(s)    IOPS     MiB/s  Fail/s  TO/s  Average      min      max
00:11:35.381 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:35.381 Nvme1n1                   :       5.01  11992.29   93.69    0.00  0.00  10660.27  4417.61  25826.04
00:11:35.381 ===================================================================================================================
00:11:35.381 [2024-12-09T09:22:07.822Z] Total                     :            11992.29   93.69    0.00  0.00  10660.27  4417.61  25826.04
[... the same error pair repeats from 10:22:07.623 to 10:22:07.735 ...]
00:11:35.381 [2024-12-09 10:22:07.743495] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext:
*ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.743538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.751514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.751557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.759532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.759573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.767563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.767603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.775537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.775558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.783552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.783573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.791575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.791595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.799606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.799641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.807663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 
[2024-12-09 10:22:07.807702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.381 [2024-12-09 10:22:07.815703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.381 [2024-12-09 10:22:07.815746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.823703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.823740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.831684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.831704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.839702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.839722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.847722] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.847742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.855744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.855763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.863766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.863785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.871794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.871815] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 [2024-12-09 10:22:07.879811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:35.639 [2024-12-09 10:22:07.879832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:35.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2464638) - No such process 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2464638 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.639 delay0 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.639 10:22:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:11:35.639 [2024-12-09 10:22:08.001991] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:42.199 Initializing NVMe Controllers 00:11:42.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:42.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:42.199 Initialization complete. Launching workers. 00:11:42.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 175 00:11:42.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 462, failed to submit 33 00:11:42.199 success 300, unsuccessful 162, failed 0 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.199 rmmod nvme_tcp 00:11:42.199 rmmod nvme_fabrics 00:11:42.199 rmmod nvme_keyring 00:11:42.199 10:22:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2463391 ']' 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2463391 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2463391 ']' 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2463391 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2463391 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2463391' 00:11:42.199 killing process with pid 2463391 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2463391 00:11:42.199 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2463391 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:42.200 10:22:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.194 00:11:44.194 real 0m28.245s 00:11:44.194 user 0m41.757s 00:11:44.194 sys 0m8.270s 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:44.194 ************************************ 00:11:44.194 END TEST nvmf_zcopy 00:11:44.194 ************************************ 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:11:44.194 10:22:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:44.452 ************************************ 00:11:44.452 START TEST nvmf_nmic 00:11:44.452 ************************************ 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:44.452 * Looking for test storage... 00:11:44.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.452 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@341 -- # ver2_l=1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.453 --rc genhtml_branch_coverage=1 00:11:44.453 --rc genhtml_function_coverage=1 00:11:44.453 --rc genhtml_legend=1 00:11:44.453 --rc geninfo_all_blocks=1 00:11:44.453 --rc geninfo_unexecuted_blocks=1 00:11:44.453 00:11:44.453 ' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.453 --rc genhtml_branch_coverage=1 00:11:44.453 --rc genhtml_function_coverage=1 00:11:44.453 --rc genhtml_legend=1 00:11:44.453 --rc geninfo_all_blocks=1 00:11:44.453 --rc geninfo_unexecuted_blocks=1 00:11:44.453 00:11:44.453 ' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.453 --rc genhtml_branch_coverage=1 00:11:44.453 --rc genhtml_function_coverage=1 00:11:44.453 --rc genhtml_legend=1 00:11:44.453 --rc geninfo_all_blocks=1 00:11:44.453 --rc geninfo_unexecuted_blocks=1 00:11:44.453 00:11:44.453 ' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.453 --rc genhtml_branch_coverage=1 00:11:44.453 --rc genhtml_function_coverage=1 00:11:44.453 --rc genhtml_legend=1 00:11:44.453 --rc geninfo_all_blocks=1 00:11:44.453 --rc geninfo_unexecuted_blocks=1 00:11:44.453 00:11:44.453 ' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # 
uname -s 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.453 
10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.453 10:22:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:46.977 10:22:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:46.977 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:46.977 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:46.977 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:46.978 Found net devices under 0000:09:00.0: cvl_0_0 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:46.978 Found net devices under 0000:09:00.1: cvl_0_1 00:11:46.978 
10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:46.978 10:22:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:46.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:46.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:11:46.978 00:11:46.978 --- 10.0.0.2 ping statistics --- 00:11:46.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.978 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:46.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:46.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:11:46.978 00:11:46.978 --- 10.0.0.1 ping statistics --- 00:11:46.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:46.978 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2468025 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2468025 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2468025 
']' 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.978 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:46.978 [2024-12-09 10:22:19.182598] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:11:46.978 [2024-12-09 10:22:19.182692] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.978 [2024-12-09 10:22:19.257154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.978 [2024-12-09 10:22:19.316725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.978 [2024-12-09 10:22:19.316774] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.978 [2024-12-09 10:22:19.316796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.978 [2024-12-09 10:22:19.316807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:46.978 [2024-12-09 10:22:19.316816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.978 [2024-12-09 10:22:19.318251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.978 [2024-12-09 10:22:19.318310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.978 [2024-12-09 10:22:19.318379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.978 [2024-12-09 10:22:19.318376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 [2024-12-09 10:22:19.461678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:47.237 10:22:19 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 Malloc0 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 [2024-12-09 10:22:19.526602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:47.237 test case1: single bdev can't be used in multiple subsystems 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 [2024-12-09 10:22:19.550407] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:47.237 [2024-12-09 10:22:19.550453] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:47.237 [2024-12-09 10:22:19.550478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:47.237 request: 00:11:47.237 { 00:11:47.237 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:47.237 "namespace": { 00:11:47.237 "bdev_name": "Malloc0", 00:11:47.237 "no_auto_visible": false, 00:11:47.237 "hide_metadata": false 00:11:47.237 }, 00:11:47.237 "method": "nvmf_subsystem_add_ns", 00:11:47.237 "req_id": 1 00:11:47.237 } 00:11:47.237 Got JSON-RPC error response 00:11:47.237 response: 00:11:47.237 { 00:11:47.237 "code": -32602, 00:11:47.237 "message": "Invalid parameters" 00:11:47.237 } 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:47.237 Adding namespace failed - expected result. 
00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:47.237 test case2: host connect to nvmf target in multiple paths 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:47.237 [2024-12-09 10:22:19.558547] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:47.237 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.238 10:22:19 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.172 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:48.738 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.738 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:48.738 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.738 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:48.738 10:22:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:50.636 10:22:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:50.636 [global] 00:11:50.636 thread=1 00:11:50.636 invalidate=1 00:11:50.636 rw=write 00:11:50.636 time_based=1 00:11:50.636 runtime=1 00:11:50.636 ioengine=libaio 00:11:50.636 direct=1 00:11:50.636 bs=4096 00:11:50.636 iodepth=1 00:11:50.636 norandommap=0 00:11:50.636 numjobs=1 00:11:50.636 00:11:50.636 verify_dump=1 00:11:50.636 verify_backlog=512 00:11:50.636 verify_state_save=0 00:11:50.636 do_verify=1 00:11:50.636 verify=crc32c-intel 00:11:50.636 [job0] 00:11:50.636 filename=/dev/nvme0n1 00:11:50.636 Could not set queue depth (nvme0n1) 00:11:50.893 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:50.893 fio-3.35 00:11:50.893 Starting 1 thread 00:11:52.265 00:11:52.265 job0: (groupid=0, jobs=1): err= 0: pid=2468661: Mon Dec 9 10:22:24 2024 00:11:52.265 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:11:52.265 slat (nsec): min=14479, max=34511, avg=19930.09, stdev=7660.75 00:11:52.265 clat (usec): min=40906, max=42084, avg=41478.41, stdev=517.75 00:11:52.265 lat (usec): min=40940, max=42102, 
avg=41498.34, stdev=518.39 00:11:52.265 clat percentiles (usec): 00:11:52.265 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:52.265 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:11:52.265 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:52.265 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:52.265 | 99.99th=[42206] 00:11:52.265 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:11:52.265 slat (nsec): min=6527, max=53839, avg=18220.23, stdev=7067.26 00:11:52.265 clat (usec): min=123, max=287, avg=183.79, stdev=23.39 00:11:52.265 lat (usec): min=130, max=322, avg=202.01, stdev=25.11 00:11:52.265 clat percentiles (usec): 00:11:52.265 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 165], 00:11:52.265 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:11:52.265 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 227], 00:11:52.265 | 99.00th=[ 260], 99.50th=[ 285], 99.90th=[ 289], 99.95th=[ 289], 00:11:52.265 | 99.99th=[ 289] 00:11:52.265 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:52.265 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:52.265 lat (usec) : 250=94.19%, 500=1.69% 00:11:52.265 lat (msec) : 50=4.12% 00:11:52.265 cpu : usr=0.98%, sys=0.79%, ctx=534, majf=0, minf=1 00:11:52.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:52.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:52.265 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:52.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:52.265 00:11:52.265 Run status group 0 (all jobs): 00:11:52.265 READ: bw=86.4KiB/s (88.4kB/s), 86.4KiB/s-86.4KiB/s (88.4kB/s-88.4kB/s), io=88.0KiB (90.1kB), 
run=1019-1019msec 00:11:52.265 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec 00:11:52.265 00:11:52.265 Disk stats (read/write): 00:11:52.265 nvme0n1: ios=69/512, merge=0/0, ticks=810/89, in_queue=899, util=91.68% 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:52.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.265 rmmod nvme_tcp 00:11:52.265 rmmod nvme_fabrics 00:11:52.265 rmmod nvme_keyring 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2468025 ']' 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2468025 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2468025 ']' 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2468025 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2468025 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2468025' 00:11:52.265 killing process with pid 2468025 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2468025 00:11:52.265 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2468025 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.523 10:22:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.059 00:11:55.059 real 0m10.276s 00:11:55.059 user 0m23.210s 00:11:55.059 sys 0m2.467s 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:55.059 ************************************ 00:11:55.059 END TEST nvmf_nmic 00:11:55.059 ************************************ 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:55.059 ************************************ 00:11:55.059 START TEST nvmf_fio_target 00:11:55.059 ************************************ 00:11:55.059 10:22:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:55.059 * Looking for test storage... 00:11:55.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.059 10:22:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.059 --rc genhtml_branch_coverage=1 00:11:55.059 --rc genhtml_function_coverage=1 00:11:55.059 --rc genhtml_legend=1 00:11:55.059 --rc geninfo_all_blocks=1 00:11:55.059 --rc geninfo_unexecuted_blocks=1 00:11:55.059 00:11:55.059 ' 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.059 --rc genhtml_branch_coverage=1 00:11:55.059 --rc genhtml_function_coverage=1 00:11:55.059 --rc genhtml_legend=1 00:11:55.059 --rc geninfo_all_blocks=1 00:11:55.059 --rc geninfo_unexecuted_blocks=1 00:11:55.059 00:11:55.059 ' 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.059 --rc genhtml_branch_coverage=1 00:11:55.059 --rc genhtml_function_coverage=1 00:11:55.059 --rc genhtml_legend=1 00:11:55.059 --rc geninfo_all_blocks=1 00:11:55.059 --rc geninfo_unexecuted_blocks=1 00:11:55.059 00:11:55.059 ' 00:11:55.059 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.059 --rc 
genhtml_branch_coverage=1 00:11:55.059 --rc genhtml_function_coverage=1 00:11:55.059 --rc genhtml_legend=1 00:11:55.059 --rc geninfo_all_blocks=1 00:11:55.059 --rc geninfo_unexecuted_blocks=1 00:11:55.059 00:11:55.060 ' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.060 10:22:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:56.967 10:22:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:56.967 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:56.967 10:22:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:56.967 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.967 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:56.968 Found net devices under 0000:09:00.0: cvl_0_0 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:56.968 Found net devices under 0000:09:00.1: cvl_0_1 
00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:56.968 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.227 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:57.227 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:11:57.227 00:11:57.227 --- 10.0.0.2 ping statistics --- 00:11:57.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.227 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:57.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:11:57.227 00:11:57.227 --- 10.0.0.1 ping statistics --- 00:11:57.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.227 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2470817 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2470817 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2470817 ']' 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.227 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.227 [2024-12-09 10:22:29.557460] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:11:57.227 [2024-12-09 10:22:29.557545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.227 [2024-12-09 10:22:29.633212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.485 [2024-12-09 10:22:29.695533] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.485 [2024-12-09 10:22:29.695586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.485 [2024-12-09 10:22:29.695598] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.485 [2024-12-09 10:22:29.695609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.485 [2024-12-09 10:22:29.695619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:57.485 [2024-12-09 10:22:29.697352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.485 [2024-12-09 10:22:29.697433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.485 [2024-12-09 10:22:29.697437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.485 [2024-12-09 10:22:29.697378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.485 10:22:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:57.743 [2024-12-09 10:22:30.098784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.743 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.002 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:58.002 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.564 10:22:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:58.564 10:22:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.821 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:58.821 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.078 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:59.078 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:59.335 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.593 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:59.593 10:22:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:59.850 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:59.850 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:00.158 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:00.158 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:00.415 10:22:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:00.672 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:00.672 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:00.930 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:00.930 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:01.187 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.444 [2024-12-09 10:22:33.782747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.444 10:22:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:01.701 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:01.958 10:22:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:02.891 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:02.891 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:02.891 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.891 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:02.891 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:02.891 10:22:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:04.787 10:22:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:04.787 [global] 00:12:04.787 thread=1 00:12:04.787 invalidate=1 00:12:04.787 rw=write 00:12:04.787 time_based=1 00:12:04.787 runtime=1 00:12:04.787 ioengine=libaio 00:12:04.787 direct=1 00:12:04.787 bs=4096 00:12:04.787 iodepth=1 00:12:04.787 norandommap=0 00:12:04.787 numjobs=1 00:12:04.787 00:12:04.787 
verify_dump=1 00:12:04.787 verify_backlog=512 00:12:04.787 verify_state_save=0 00:12:04.787 do_verify=1 00:12:04.787 verify=crc32c-intel 00:12:04.787 [job0] 00:12:04.787 filename=/dev/nvme0n1 00:12:04.787 [job1] 00:12:04.787 filename=/dev/nvme0n2 00:12:04.787 [job2] 00:12:04.787 filename=/dev/nvme0n3 00:12:04.787 [job3] 00:12:04.787 filename=/dev/nvme0n4 00:12:04.787 Could not set queue depth (nvme0n1) 00:12:04.787 Could not set queue depth (nvme0n2) 00:12:04.787 Could not set queue depth (nvme0n3) 00:12:04.787 Could not set queue depth (nvme0n4) 00:12:05.044 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:05.044 fio-3.35 00:12:05.044 Starting 4 threads 00:12:06.417 00:12:06.417 job0: (groupid=0, jobs=1): err= 0: pid=2471907: Mon Dec 9 10:22:38 2024 00:12:06.417 read: IOPS=2037, BW=8152KiB/s (8347kB/s)(8160KiB/1001msec) 00:12:06.417 slat (nsec): min=4230, max=65569, avg=11190.48, stdev=7453.88 00:12:06.417 clat (usec): min=187, max=562, avg=267.69, stdev=67.92 00:12:06.417 lat (usec): min=192, max=583, avg=278.88, stdev=71.75 00:12:06.417 clat percentiles (usec): 00:12:06.417 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:12:06.417 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 249], 00:12:06.417 | 70.00th=[ 258], 80.00th=[ 297], 90.00th=[ 379], 95.00th=[ 437], 00:12:06.417 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 553], 99.95th=[ 553], 00:12:06.417 | 99.99th=[ 562] 00:12:06.417 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:06.417 slat (nsec): min=5512, max=59263, avg=12520.23, stdev=5422.74 
00:12:06.417 clat (usec): min=133, max=445, avg=191.12, stdev=40.85 00:12:06.417 lat (usec): min=139, max=460, avg=203.64, stdev=41.72 00:12:06.417 clat percentiles (usec): 00:12:06.417 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:12:06.417 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 176], 60.00th=[ 186], 00:12:06.417 | 70.00th=[ 215], 80.00th=[ 231], 90.00th=[ 251], 95.00th=[ 269], 00:12:06.417 | 99.00th=[ 302], 99.50th=[ 310], 99.90th=[ 375], 99.95th=[ 396], 00:12:06.417 | 99.99th=[ 445] 00:12:06.417 bw ( KiB/s): min= 8192, max= 8192, per=32.01%, avg=8192.00, stdev= 0.00, samples=1 00:12:06.417 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:06.417 lat (usec) : 250=75.07%, 500=24.68%, 750=0.24% 00:12:06.417 cpu : usr=2.70%, sys=5.00%, ctx=4088, majf=0, minf=2 00:12:06.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.417 issued rwts: total=2040,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.417 job1: (groupid=0, jobs=1): err= 0: pid=2471929: Mon Dec 9 10:22:38 2024 00:12:06.417 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:06.417 slat (nsec): min=6898, max=36601, avg=8474.12, stdev=1526.33 00:12:06.417 clat (usec): min=175, max=565, avg=325.57, stdev=41.64 00:12:06.417 lat (usec): min=184, max=574, avg=334.04, stdev=41.75 00:12:06.417 clat percentiles (usec): 00:12:06.417 | 1.00th=[ 208], 5.00th=[ 281], 10.00th=[ 293], 20.00th=[ 306], 00:12:06.417 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 326], 00:12:06.417 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 363], 95.00th=[ 408], 00:12:06.417 | 99.00th=[ 482], 99.50th=[ 502], 99.90th=[ 562], 99.95th=[ 562], 00:12:06.417 | 99.99th=[ 562] 00:12:06.417 write: 
IOPS=2007, BW=8032KiB/s (8225kB/s)(8040KiB/1001msec); 0 zone resets 00:12:06.417 slat (nsec): min=8090, max=35471, avg=10073.94, stdev=1634.20 00:12:06.417 clat (usec): min=142, max=482, avg=227.16, stdev=25.94 00:12:06.417 lat (usec): min=152, max=493, avg=237.24, stdev=26.01 00:12:06.417 clat percentiles (usec): 00:12:06.417 | 1.00th=[ 155], 5.00th=[ 174], 10.00th=[ 196], 20.00th=[ 215], 00:12:06.417 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:12:06.417 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 265], 00:12:06.417 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 326], 99.95th=[ 433], 00:12:06.417 | 99.99th=[ 482] 00:12:06.417 bw ( KiB/s): min= 8192, max= 8192, per=32.01%, avg=8192.00, stdev= 0.00, samples=1 00:12:06.417 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:06.417 lat (usec) : 250=49.63%, 500=50.11%, 750=0.25% 00:12:06.417 cpu : usr=2.60%, sys=4.60%, ctx=3546, majf=0, minf=1 00:12:06.417 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.417 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.417 issued rwts: total=1536,2010,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.417 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.417 job2: (groupid=0, jobs=1): err= 0: pid=2471953: Mon Dec 9 10:22:38 2024 00:12:06.417 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:06.417 slat (nsec): min=5325, max=60544, avg=18379.29, stdev=9731.42 00:12:06.417 clat (usec): min=202, max=640, avg=328.87, stdev=67.60 00:12:06.417 lat (usec): min=225, max=659, avg=347.25, stdev=65.01 00:12:06.417 clat percentiles (usec): 00:12:06.417 | 1.00th=[ 223], 5.00th=[ 237], 10.00th=[ 247], 20.00th=[ 269], 00:12:06.417 | 30.00th=[ 285], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 338], 00:12:06.417 | 70.00th=[ 367], 80.00th=[ 392], 90.00th=[ 412], 95.00th=[ 
453], 00:12:06.417 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 570], 99.95th=[ 644], 00:12:06.417 | 99.99th=[ 644] 00:12:06.417 write: IOPS=1883, BW=7532KiB/s (7713kB/s)(7540KiB/1001msec); 0 zone resets 00:12:06.417 slat (nsec): min=6807, max=61285, avg=15319.22, stdev=6620.70 00:12:06.417 clat (usec): min=133, max=442, avg=223.55, stdev=25.74 00:12:06.417 lat (usec): min=141, max=459, avg=238.87, stdev=24.32 00:12:06.417 clat percentiles (usec): 00:12:06.417 | 1.00th=[ 163], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 206], 00:12:06.417 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:12:06.417 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 265], 00:12:06.417 | 99.00th=[ 285], 99.50th=[ 314], 99.90th=[ 412], 99.95th=[ 441], 00:12:06.417 | 99.99th=[ 441] 00:12:06.417 bw ( KiB/s): min= 8192, max= 8192, per=32.01%, avg=8192.00, stdev= 0.00, samples=1 00:12:06.418 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:06.418 lat (usec) : 250=53.26%, 500=45.86%, 750=0.88% 00:12:06.418 cpu : usr=2.20%, sys=6.70%, ctx=3422, majf=0, minf=1 00:12:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 issued rwts: total=1536,1885,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.418 job3: (groupid=0, jobs=1): err= 0: pid=2471954: Mon Dec 9 10:22:38 2024 00:12:06.418 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:12:06.418 slat (nsec): min=13331, max=34103, avg=16162.86, stdev=5155.66 00:12:06.418 clat (usec): min=40897, max=42056, avg=41131.14, stdev=379.56 00:12:06.418 lat (usec): min=40912, max=42072, avg=41147.30, stdev=379.06 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 
00:12:06.418 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:06.418 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:12:06.418 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:06.418 | 99.99th=[42206] 00:12:06.418 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:12:06.418 slat (usec): min=8, max=14848, avg=50.22, stdev=655.35 00:12:06.418 clat (usec): min=164, max=414, avg=224.57, stdev=27.29 00:12:06.418 lat (usec): min=173, max=15249, avg=274.79, stdev=663.53 00:12:06.418 clat percentiles (usec): 00:12:06.418 | 1.00th=[ 174], 5.00th=[ 192], 10.00th=[ 200], 20.00th=[ 206], 00:12:06.418 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 227], 00:12:06.418 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:12:06.418 | 99.00th=[ 306], 99.50th=[ 343], 99.90th=[ 416], 99.95th=[ 416], 00:12:06.418 | 99.99th=[ 416] 00:12:06.418 bw ( KiB/s): min= 4096, max= 4096, per=16.01%, avg=4096.00, stdev= 0.00, samples=1 00:12:06.418 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:06.418 lat (usec) : 250=80.68%, 500=15.38% 00:12:06.418 lat (msec) : 50=3.94% 00:12:06.418 cpu : usr=1.09%, sys=0.89%, ctx=536, majf=0, minf=1 00:12:06.418 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.418 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.418 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.418 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.418 00:12:06.418 Run status group 0 (all jobs): 00:12:06.418 READ: bw=19.9MiB/s (20.8MB/s), 83.2KiB/s-8152KiB/s (85.2kB/s-8347kB/s), io=20.1MiB (21.0MB), run=1001-1009msec 00:12:06.418 WRITE: bw=25.0MiB/s (26.2MB/s), 2030KiB/s-8184KiB/s (2078kB/s-8380kB/s), io=25.2MiB (26.4MB), run=1001-1009msec 00:12:06.418 
00:12:06.418 Disk stats (read/write): 00:12:06.418 nvme0n1: ios=1586/2013, merge=0/0, ticks=418/372, in_queue=790, util=86.37% 00:12:06.418 nvme0n2: ios=1391/1536, merge=0/0, ticks=441/336, in_queue=777, util=86.25% 00:12:06.418 nvme0n3: ios=1390/1536, merge=0/0, ticks=1315/323, in_queue=1638, util=97.48% 00:12:06.418 nvme0n4: ios=40/512, merge=0/0, ticks=1643/103, in_queue=1746, util=97.46% 00:12:06.418 10:22:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:06.418 [global] 00:12:06.418 thread=1 00:12:06.418 invalidate=1 00:12:06.418 rw=randwrite 00:12:06.418 time_based=1 00:12:06.418 runtime=1 00:12:06.418 ioengine=libaio 00:12:06.418 direct=1 00:12:06.418 bs=4096 00:12:06.418 iodepth=1 00:12:06.418 norandommap=0 00:12:06.418 numjobs=1 00:12:06.418 00:12:06.418 verify_dump=1 00:12:06.418 verify_backlog=512 00:12:06.418 verify_state_save=0 00:12:06.418 do_verify=1 00:12:06.418 verify=crc32c-intel 00:12:06.418 [job0] 00:12:06.418 filename=/dev/nvme0n1 00:12:06.418 [job1] 00:12:06.418 filename=/dev/nvme0n2 00:12:06.418 [job2] 00:12:06.418 filename=/dev/nvme0n3 00:12:06.418 [job3] 00:12:06.418 filename=/dev/nvme0n4 00:12:06.418 Could not set queue depth (nvme0n1) 00:12:06.418 Could not set queue depth (nvme0n2) 00:12:06.418 Could not set queue depth (nvme0n3) 00:12:06.418 Could not set queue depth (nvme0n4) 00:12:06.418 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.418 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.418 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.418 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:06.418 fio-3.35 00:12:06.418 Starting 4 
threads 00:12:07.849 00:12:07.849 job0: (groupid=0, jobs=1): err= 0: pid=2472184: Mon Dec 9 10:22:39 2024 00:12:07.849 read: IOPS=2042, BW=8172KiB/s (8368kB/s)(8180KiB/1001msec) 00:12:07.849 slat (nsec): min=4436, max=35420, avg=11487.10, stdev=5895.38 00:12:07.849 clat (usec): min=191, max=510, avg=265.33, stdev=54.34 00:12:07.849 lat (usec): min=198, max=527, avg=276.81, stdev=57.40 00:12:07.849 clat percentiles (usec): 00:12:07.849 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 219], 00:12:07.849 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 262], 60.00th=[ 277], 00:12:07.849 | 70.00th=[ 285], 80.00th=[ 310], 90.00th=[ 334], 95.00th=[ 371], 00:12:07.849 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 482], 99.95th=[ 494], 00:12:07.849 | 99.99th=[ 510] 00:12:07.849 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:07.849 slat (nsec): min=6032, max=45246, avg=14861.36, stdev=5681.70 00:12:07.849 clat (usec): min=133, max=856, avg=189.65, stdev=60.87 00:12:07.849 lat (usec): min=142, max=869, avg=204.51, stdev=61.36 00:12:07.849 clat percentiles (usec): 00:12:07.849 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:12:07.849 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 172], 00:12:07.849 | 70.00th=[ 192], 80.00th=[ 235], 90.00th=[ 255], 95.00th=[ 318], 00:12:07.849 | 99.00th=[ 404], 99.50th=[ 433], 99.90th=[ 734], 99.95th=[ 840], 00:12:07.849 | 99.99th=[ 857] 00:12:07.849 bw ( KiB/s): min= 8192, max= 8192, per=34.46%, avg=8192.00, stdev= 0.00, samples=1 00:12:07.849 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:07.849 lat (usec) : 250=68.36%, 500=31.49%, 750=0.10%, 1000=0.05% 00:12:07.849 cpu : usr=2.70%, sys=5.70%, ctx=4094, majf=0, minf=1 00:12:07.849 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.849 issued rwts: total=2045,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.849 job1: (groupid=0, jobs=1): err= 0: pid=2472186: Mon Dec 9 10:22:39 2024 00:12:07.849 read: IOPS=188, BW=752KiB/s (770kB/s)(784KiB/1042msec) 00:12:07.849 slat (nsec): min=6511, max=39370, avg=18074.01, stdev=5133.23 00:12:07.849 clat (usec): min=295, max=42560, avg=4538.21, stdev=12367.85 00:12:07.849 lat (usec): min=311, max=42581, avg=4556.29, stdev=12367.94 00:12:07.849 clat percentiles (usec): 00:12:07.849 | 1.00th=[ 322], 5.00th=[ 347], 10.00th=[ 351], 20.00th=[ 359], 00:12:07.849 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 379], 60.00th=[ 383], 00:12:07.849 | 70.00th=[ 396], 80.00th=[ 412], 90.00th=[40109], 95.00th=[41157], 00:12:07.849 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:12:07.849 | 99.99th=[42730] 00:12:07.849 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:12:07.849 slat (nsec): min=7526, max=65547, avg=16056.22, stdev=8925.29 00:12:07.849 clat (usec): min=161, max=943, avg=267.30, stdev=76.34 00:12:07.849 lat (usec): min=170, max=981, avg=283.36, stdev=77.29 00:12:07.849 clat percentiles (usec): 00:12:07.849 | 1.00th=[ 169], 5.00th=[ 184], 10.00th=[ 194], 20.00th=[ 212], 00:12:07.849 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 251], 60.00th=[ 269], 00:12:07.849 | 70.00th=[ 285], 80.00th=[ 314], 90.00th=[ 355], 95.00th=[ 404], 00:12:07.849 | 99.00th=[ 474], 99.50th=[ 644], 99.90th=[ 947], 99.95th=[ 947], 00:12:07.849 | 99.99th=[ 947] 00:12:07.849 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:12:07.849 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:07.849 lat (usec) : 250=35.59%, 500=60.59%, 750=0.85%, 1000=0.14% 00:12:07.849 lat (msec) : 50=2.82% 00:12:07.849 cpu : usr=0.77%, sys=1.54%, ctx=708, majf=0, minf=1 00:12:07.849 IO depths : 1=100.0%, 
2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.849 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.849 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.849 issued rwts: total=196,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.849 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.849 job2: (groupid=0, jobs=1): err= 0: pid=2472187: Mon Dec 9 10:22:39 2024 00:12:07.849 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:07.849 slat (nsec): min=4605, max=74365, avg=19798.83, stdev=10779.17 00:12:07.849 clat (usec): min=213, max=534, avg=354.25, stdev=55.03 00:12:07.849 lat (usec): min=226, max=567, avg=374.04, stdev=57.94 00:12:07.849 clat percentiles (usec): 00:12:07.849 | 1.00th=[ 241], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 00:12:07.849 | 30.00th=[ 318], 40.00th=[ 330], 50.00th=[ 351], 60.00th=[ 367], 00:12:07.849 | 70.00th=[ 383], 80.00th=[ 404], 90.00th=[ 429], 95.00th=[ 457], 00:12:07.849 | 99.00th=[ 494], 99.50th=[ 502], 99.90th=[ 523], 99.95th=[ 537], 00:12:07.849 | 99.99th=[ 537] 00:12:07.849 write: IOPS=1629, BW=6517KiB/s (6674kB/s)(6524KiB/1001msec); 0 zone resets 00:12:07.849 slat (nsec): min=6191, max=76709, avg=14855.33, stdev=7134.86 00:12:07.849 clat (usec): min=145, max=450, avg=236.81, stdev=48.18 00:12:07.849 lat (usec): min=156, max=470, avg=251.67, stdev=48.79 00:12:07.850 clat percentiles (usec): 00:12:07.850 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 174], 20.00th=[ 208], 00:12:07.850 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:12:07.850 | 70.00th=[ 245], 80.00th=[ 258], 90.00th=[ 302], 95.00th=[ 338], 00:12:07.850 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 449], 00:12:07.850 | 99.99th=[ 449] 00:12:07.850 bw ( KiB/s): min= 8192, max= 8192, per=34.46%, avg=8192.00, stdev= 0.00, samples=1 00:12:07.850 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:07.850 lat (usec) : 
250=40.16%, 500=59.58%, 750=0.25% 00:12:07.850 cpu : usr=2.40%, sys=6.40%, ctx=3171, majf=0, minf=1 00:12:07.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.850 issued rwts: total=1536,1631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.850 job3: (groupid=0, jobs=1): err= 0: pid=2472188: Mon Dec 9 10:22:39 2024 00:12:07.850 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:07.850 slat (nsec): min=4674, max=57407, avg=14664.96, stdev=7009.13 00:12:07.850 clat (usec): min=221, max=895, avg=334.15, stdev=61.59 00:12:07.850 lat (usec): min=227, max=904, avg=348.81, stdev=63.74 00:12:07.850 clat percentiles (usec): 00:12:07.850 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 258], 20.00th=[ 277], 00:12:07.850 | 30.00th=[ 302], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 347], 00:12:07.850 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 420], 00:12:07.850 | 99.00th=[ 529], 99.50th=[ 586], 99.90th=[ 824], 99.95th=[ 898], 00:12:07.850 | 99.99th=[ 898] 00:12:07.850 write: IOPS=2000, BW=8000KiB/s (8192kB/s)(8008KiB/1001msec); 0 zone resets 00:12:07.850 slat (nsec): min=5992, max=47677, avg=13479.90, stdev=6829.07 00:12:07.850 clat (usec): min=145, max=2758, avg=211.20, stdev=70.59 00:12:07.850 lat (usec): min=153, max=2765, avg=224.68, stdev=72.86 00:12:07.850 clat percentiles (usec): 00:12:07.850 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:12:07.850 | 30.00th=[ 184], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 210], 00:12:07.850 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 255], 95.00th=[ 293], 00:12:07.850 | 99.00th=[ 367], 99.50th=[ 412], 99.90th=[ 424], 99.95th=[ 445], 00:12:07.850 | 99.99th=[ 2769] 00:12:07.850 bw ( KiB/s): min= 8192, max= 8192, per=34.46%, 
avg=8192.00, stdev= 0.00, samples=1 00:12:07.850 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:07.850 lat (usec) : 250=52.46%, 500=46.89%, 750=0.57%, 1000=0.06% 00:12:07.850 lat (msec) : 4=0.03% 00:12:07.850 cpu : usr=3.70%, sys=6.30%, ctx=3538, majf=0, minf=1 00:12:07.850 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:07.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.850 issued rwts: total=1536,2002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.850 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:07.850 00:12:07.850 Run status group 0 (all jobs): 00:12:07.850 READ: bw=19.9MiB/s (20.9MB/s), 752KiB/s-8172KiB/s (770kB/s-8368kB/s), io=20.8MiB (21.8MB), run=1001-1042msec 00:12:07.850 WRITE: bw=23.2MiB/s (24.3MB/s), 1965KiB/s-8184KiB/s (2013kB/s-8380kB/s), io=24.2MiB (25.4MB), run=1001-1042msec 00:12:07.850 00:12:07.850 Disk stats (read/write): 00:12:07.850 nvme0n1: ios=1643/2048, merge=0/0, ticks=684/374, in_queue=1058, util=100.00% 00:12:07.850 nvme0n2: ios=211/512, merge=0/0, ticks=907/137, in_queue=1044, util=91.37% 00:12:07.850 nvme0n3: ios=1221/1536, merge=0/0, ticks=683/347, in_queue=1030, util=96.36% 00:12:07.850 nvme0n4: ios=1352/1536, merge=0/0, ticks=438/311, in_queue=749, util=89.73% 00:12:07.850 10:22:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:07.850 [global] 00:12:07.850 thread=1 00:12:07.850 invalidate=1 00:12:07.850 rw=write 00:12:07.850 time_based=1 00:12:07.850 runtime=1 00:12:07.850 ioengine=libaio 00:12:07.850 direct=1 00:12:07.850 bs=4096 00:12:07.850 iodepth=128 00:12:07.850 norandommap=0 00:12:07.850 numjobs=1 00:12:07.850 00:12:07.850 verify_dump=1 00:12:07.850 verify_backlog=512 00:12:07.850 verify_state_save=0 
00:12:07.850 do_verify=1 00:12:07.850 verify=crc32c-intel 00:12:07.850 [job0] 00:12:07.850 filename=/dev/nvme0n1 00:12:07.850 [job1] 00:12:07.850 filename=/dev/nvme0n2 00:12:07.850 [job2] 00:12:07.850 filename=/dev/nvme0n3 00:12:07.850 [job3] 00:12:07.850 filename=/dev/nvme0n4 00:12:07.850 Could not set queue depth (nvme0n1) 00:12:07.850 Could not set queue depth (nvme0n2) 00:12:07.850 Could not set queue depth (nvme0n3) 00:12:07.850 Could not set queue depth (nvme0n4) 00:12:07.850 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.850 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.850 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.850 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.850 fio-3.35 00:12:07.850 Starting 4 threads 00:12:09.223 00:12:09.223 job0: (groupid=0, jobs=1): err= 0: pid=2472419: Mon Dec 9 10:22:41 2024 00:12:09.223 read: IOPS=4773, BW=18.6MiB/s (19.6MB/s)(18.7MiB/1005msec) 00:12:09.223 slat (usec): min=2, max=23978, avg=94.82, stdev=661.76 00:12:09.223 clat (usec): min=713, max=50698, avg=13385.02, stdev=6224.91 00:12:09.223 lat (usec): min=1493, max=50738, avg=13479.84, stdev=6242.36 00:12:09.223 clat percentiles (usec): 00:12:09.223 | 1.00th=[ 1844], 5.00th=[ 6456], 10.00th=[ 9110], 20.00th=[11207], 00:12:09.223 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[12387], 00:12:09.223 | 70.00th=[12649], 80.00th=[13960], 90.00th=[21365], 95.00th=[28705], 00:12:09.223 | 99.00th=[40109], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:12:09.223 | 99.99th=[50594] 00:12:09.223 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:09.223 slat (usec): min=2, max=20914, avg=92.34, stdev=562.10 00:12:09.223 clat (usec): min=4694, max=31628, 
avg=12256.62, stdev=2058.27 00:12:09.223 lat (usec): min=4761, max=41288, avg=12348.96, stdev=2125.05 00:12:09.223 clat percentiles (usec): 00:12:09.223 | 1.00th=[ 8094], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[11076], 00:12:09.223 | 30.00th=[11469], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:12:09.223 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14222], 95.00th=[15401], 00:12:09.223 | 99.00th=[22152], 99.50th=[24511], 99.90th=[24511], 99.95th=[25035], 00:12:09.223 | 99.99th=[31589] 00:12:09.223 bw ( KiB/s): min=17832, max=23128, per=29.77%, avg=20480.00, stdev=3744.84, samples=2 00:12:09.223 iops : min= 4458, max= 5782, avg=5120.00, stdev=936.21, samples=2 00:12:09.223 lat (usec) : 750=0.01% 00:12:09.224 lat (msec) : 2=0.53%, 4=0.25%, 10=9.54%, 20=83.85%, 50=5.81% 00:12:09.224 lat (msec) : 100=0.01% 00:12:09.224 cpu : usr=3.98%, sys=7.17%, ctx=451, majf=0, minf=2 00:12:09.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:09.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.224 issued rwts: total=4797,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.224 job1: (groupid=0, jobs=1): err= 0: pid=2472420: Mon Dec 9 10:22:41 2024 00:12:09.224 read: IOPS=4017, BW=15.7MiB/s (16.5MB/s)(15.8MiB/1008msec) 00:12:09.224 slat (usec): min=3, max=16472, avg=114.15, stdev=785.89 00:12:09.224 clat (usec): min=4058, max=40148, avg=14262.19, stdev=5273.63 00:12:09.224 lat (usec): min=5100, max=40158, avg=14376.34, stdev=5320.93 00:12:09.224 clat percentiles (usec): 00:12:09.224 | 1.00th=[ 6063], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11731], 00:12:09.224 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:12:09.224 | 70.00th=[13829], 80.00th=[15795], 90.00th=[19792], 95.00th=[25560], 00:12:09.224 | 99.00th=[36439], 
99.50th=[38536], 99.90th=[40109], 99.95th=[40109], 00:12:09.224 | 99.99th=[40109] 00:12:09.224 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:12:09.224 slat (usec): min=4, max=11712, avg=119.62, stdev=670.92 00:12:09.224 clat (usec): min=1790, max=69201, avg=17096.01, stdev=13094.36 00:12:09.224 lat (usec): min=1797, max=69208, avg=17215.63, stdev=13177.34 00:12:09.224 clat percentiles (usec): 00:12:09.224 | 1.00th=[ 3654], 5.00th=[ 5932], 10.00th=[ 8717], 20.00th=[11207], 00:12:09.224 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13173], 60.00th=[13698], 00:12:09.224 | 70.00th=[14222], 80.00th=[21365], 90.00th=[24511], 95.00th=[54789], 00:12:09.224 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:12:09.224 | 99.99th=[68682] 00:12:09.224 bw ( KiB/s): min=12288, max=20480, per=23.81%, avg=16384.00, stdev=5792.62, samples=2 00:12:09.224 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2 00:12:09.224 lat (msec) : 2=0.10%, 4=0.54%, 10=9.78%, 20=74.37%, 50=12.48% 00:12:09.224 lat (msec) : 100=2.73% 00:12:09.224 cpu : usr=5.66%, sys=7.85%, ctx=452, majf=0, minf=1 00:12:09.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:12:09.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.224 issued rwts: total=4050,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.224 job2: (groupid=0, jobs=1): err= 0: pid=2472421: Mon Dec 9 10:22:41 2024 00:12:09.224 read: IOPS=3111, BW=12.2MiB/s (12.7MB/s)(12.3MiB/1015msec) 00:12:09.224 slat (usec): min=3, max=13269, avg=149.96, stdev=954.01 00:12:09.224 clat (msec): min=4, max=100, avg=16.17, stdev= 8.60 00:12:09.224 lat (msec): min=4, max=100, avg=16.32, stdev= 8.74 00:12:09.224 clat percentiles (msec): 00:12:09.224 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 
11], 20.00th=[ 13], 00:12:09.224 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:12:09.224 | 70.00th=[ 17], 80.00th=[ 19], 90.00th=[ 24], 95.00th=[ 29], 00:12:09.224 | 99.00th=[ 56], 99.50th=[ 80], 99.90th=[ 102], 99.95th=[ 102], 00:12:09.224 | 99.99th=[ 102] 00:12:09.224 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec); 0 zone resets 00:12:09.224 slat (usec): min=4, max=11278, avg=138.27, stdev=694.24 00:12:09.224 clat (msec): min=3, max=127, avg=21.64, stdev=20.47 00:12:09.224 lat (msec): min=3, max=127, avg=21.78, stdev=20.56 00:12:09.224 clat percentiles (msec): 00:12:09.224 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 13], 00:12:09.224 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 15], 00:12:09.224 | 70.00th=[ 18], 80.00th=[ 25], 90.00th=[ 50], 95.00th=[ 62], 00:12:09.224 | 99.00th=[ 112], 99.50th=[ 116], 99.90th=[ 128], 99.95th=[ 128], 00:12:09.224 | 99.99th=[ 128] 00:12:09.224 bw ( KiB/s): min= 9344, max=18992, per=20.59%, avg=14168.00, stdev=6822.17, samples=2 00:12:09.224 iops : min= 2336, max= 4748, avg=3542.00, stdev=1705.54, samples=2 00:12:09.224 lat (msec) : 4=0.18%, 10=8.40%, 20=68.96%, 50=16.73%, 100=4.33% 00:12:09.224 lat (msec) : 250=1.41% 00:12:09.224 cpu : usr=3.16%, sys=8.48%, ctx=433, majf=0, minf=1 00:12:09.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:09.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.224 issued rwts: total=3158,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.224 job3: (groupid=0, jobs=1): err= 0: pid=2472422: Mon Dec 9 10:22:41 2024 00:12:09.224 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:12:09.224 slat (usec): min=3, max=12703, avg=118.74, stdev=831.55 00:12:09.224 clat (usec): min=4847, max=29637, avg=14967.35, stdev=4103.32 
00:12:09.224 lat (usec): min=4854, max=29658, avg=15086.10, stdev=4153.92 00:12:09.224 clat percentiles (usec): 00:12:09.224 | 1.00th=[ 5932], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[12125], 00:12:09.224 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13566], 60.00th=[14091], 00:12:09.224 | 70.00th=[16712], 80.00th=[18482], 90.00th=[21103], 95.00th=[22938], 00:12:09.224 | 99.00th=[26870], 99.50th=[27132], 99.90th=[29230], 99.95th=[29230], 00:12:09.224 | 99.99th=[29754] 00:12:09.224 write: IOPS=4626, BW=18.1MiB/s (18.9MB/s)(18.2MiB/1007msec); 0 zone resets 00:12:09.224 slat (usec): min=4, max=12565, avg=87.42, stdev=488.74 00:12:09.224 clat (usec): min=1340, max=27594, avg=12589.62, stdev=2806.19 00:12:09.224 lat (usec): min=1351, max=27643, avg=12677.04, stdev=2852.92 00:12:09.224 clat percentiles (usec): 00:12:09.224 | 1.00th=[ 4015], 5.00th=[ 6259], 10.00th=[ 8225], 20.00th=[11731], 00:12:09.224 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:12:09.224 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14484], 95.00th=[14877], 00:12:09.224 | 99.00th=[19792], 99.50th=[21627], 99.90th=[26084], 99.95th=[26346], 00:12:09.224 | 99.99th=[27657] 00:12:09.224 bw ( KiB/s): min=17192, max=19672, per=26.79%, avg=18432.00, stdev=1753.62, samples=2 00:12:09.224 iops : min= 4298, max= 4918, avg=4608.00, stdev=438.41, samples=2 00:12:09.224 lat (msec) : 2=0.03%, 4=0.47%, 10=9.43%, 20=83.20%, 50=6.86% 00:12:09.224 cpu : usr=5.96%, sys=9.05%, ctx=528, majf=0, minf=2 00:12:09.224 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:09.224 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.224 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.224 issued rwts: total=4608,4659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.224 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.224 00:12:09.224 Run status group 0 (all jobs): 00:12:09.224 READ: bw=63.9MiB/s 
(67.0MB/s), 12.2MiB/s-18.6MiB/s (12.7MB/s-19.6MB/s), io=64.9MiB (68.0MB), run=1005-1015msec 00:12:09.224 WRITE: bw=67.2MiB/s (70.5MB/s), 13.8MiB/s-19.9MiB/s (14.5MB/s-20.9MB/s), io=68.2MiB (71.5MB), run=1005-1015msec 00:12:09.224 00:12:09.224 Disk stats (read/write): 00:12:09.224 nvme0n1: ios=4146/4207, merge=0/0, ticks=28483/19361, in_queue=47844, util=86.67% 00:12:09.224 nvme0n2: ios=3112/3519, merge=0/0, ticks=42152/61261, in_queue=103413, util=100.00% 00:12:09.224 nvme0n3: ios=3131/3319, merge=0/0, ticks=45316/58570, in_queue=103886, util=98.23% 00:12:09.224 nvme0n4: ios=3606/4096, merge=0/0, ticks=52885/50586, in_queue=103471, util=98.21% 00:12:09.224 10:22:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:09.224 [global] 00:12:09.224 thread=1 00:12:09.224 invalidate=1 00:12:09.224 rw=randwrite 00:12:09.224 time_based=1 00:12:09.224 runtime=1 00:12:09.224 ioengine=libaio 00:12:09.224 direct=1 00:12:09.224 bs=4096 00:12:09.224 iodepth=128 00:12:09.224 norandommap=0 00:12:09.224 numjobs=1 00:12:09.224 00:12:09.224 verify_dump=1 00:12:09.224 verify_backlog=512 00:12:09.224 verify_state_save=0 00:12:09.224 do_verify=1 00:12:09.224 verify=crc32c-intel 00:12:09.224 [job0] 00:12:09.224 filename=/dev/nvme0n1 00:12:09.224 [job1] 00:12:09.224 filename=/dev/nvme0n2 00:12:09.224 [job2] 00:12:09.224 filename=/dev/nvme0n3 00:12:09.224 [job3] 00:12:09.224 filename=/dev/nvme0n4 00:12:09.224 Could not set queue depth (nvme0n1) 00:12:09.224 Could not set queue depth (nvme0n2) 00:12:09.224 Could not set queue depth (nvme0n3) 00:12:09.224 Could not set queue depth (nvme0n4) 00:12:09.224 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.224 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.224 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.224 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:09.224 fio-3.35 00:12:09.224 Starting 4 threads 00:12:10.598 00:12:10.598 job0: (groupid=0, jobs=1): err= 0: pid=2472646: Mon Dec 9 10:22:42 2024 00:12:10.598 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:12:10.598 slat (usec): min=2, max=7718, avg=81.11, stdev=496.87 00:12:10.598 clat (usec): min=4877, max=26840, avg=11680.62, stdev=1824.68 00:12:10.598 lat (usec): min=4883, max=26848, avg=11761.74, stdev=1877.12 00:12:10.598 clat percentiles (usec): 00:12:10.598 | 1.00th=[ 5014], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10683], 00:12:10.598 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:12:10.598 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13173], 95.00th=[13829], 00:12:10.598 | 99.00th=[17695], 99.50th=[20317], 99.90th=[23462], 99.95th=[26870], 00:12:10.598 | 99.99th=[26870] 00:12:10.598 write: IOPS=5422, BW=21.2MiB/s (22.2MB/s)(21.2MiB/1001msec); 0 zone resets 00:12:10.598 slat (usec): min=3, max=10998, avg=87.91, stdev=557.54 00:12:10.598 clat (usec): min=435, max=47230, avg=12033.53, stdev=5859.51 00:12:10.598 lat (usec): min=516, max=47240, avg=12121.43, stdev=5894.04 00:12:10.598 clat percentiles (usec): 00:12:10.598 | 1.00th=[ 1811], 5.00th=[ 4686], 10.00th=[ 7177], 20.00th=[ 8979], 00:12:10.598 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11338], 60.00th=[11863], 00:12:10.598 | 70.00th=[12256], 80.00th=[12649], 90.00th=[16188], 95.00th=[24773], 00:12:10.598 | 99.00th=[41157], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:12:10.598 | 99.99th=[47449] 00:12:10.598 bw ( KiB/s): min=20480, max=20480, per=31.23%, avg=20480.00, stdev= 0.00, samples=1 00:12:10.598 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:10.598 lat (usec) : 500=0.01%, 750=0.15%, 1000=0.07% 
00:12:10.598 lat (msec) : 2=0.33%, 4=1.29%, 10=15.77%, 20=78.14%, 50=4.25% 00:12:10.598 cpu : usr=6.20%, sys=9.10%, ctx=387, majf=0, minf=1 00:12:10.598 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:10.598 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.598 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.598 issued rwts: total=5120,5428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.598 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.598 job1: (groupid=0, jobs=1): err= 0: pid=2472653: Mon Dec 9 10:22:42 2024 00:12:10.598 read: IOPS=3226, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1006msec) 00:12:10.598 slat (usec): min=2, max=10989, avg=117.70, stdev=707.13 00:12:10.598 clat (usec): min=5024, max=39458, avg=14310.52, stdev=4111.83 00:12:10.599 lat (usec): min=5729, max=39473, avg=14428.22, stdev=4175.55 00:12:10.599 clat percentiles (usec): 00:12:10.599 | 1.00th=[ 5800], 5.00th=[ 8291], 10.00th=[ 9110], 20.00th=[11207], 00:12:10.599 | 30.00th=[12387], 40.00th=[13435], 50.00th=[14222], 60.00th=[15139], 00:12:10.599 | 70.00th=[15664], 80.00th=[16909], 90.00th=[18744], 95.00th=[21103], 00:12:10.599 | 99.00th=[26084], 99.50th=[32375], 99.90th=[39584], 99.95th=[39584], 00:12:10.599 | 99.99th=[39584] 00:12:10.599 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:12:10.599 slat (usec): min=4, max=10855, avg=158.81, stdev=794.95 00:12:10.599 clat (usec): min=6215, max=73363, avg=22549.68, stdev=15566.85 00:12:10.599 lat (usec): min=6226, max=73386, avg=22708.49, stdev=15674.37 00:12:10.599 clat percentiles (usec): 00:12:10.599 | 1.00th=[ 6325], 5.00th=[10421], 10.00th=[11338], 20.00th=[12649], 00:12:10.599 | 30.00th=[13173], 40.00th=[13698], 50.00th=[16450], 60.00th=[19006], 00:12:10.599 | 70.00th=[21365], 80.00th=[30278], 90.00th=[47449], 95.00th=[61604], 00:12:10.599 | 99.00th=[68682], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 
00:12:10.599 | 99.99th=[72877] 00:12:10.599 bw ( KiB/s): min=12288, max=16384, per=21.86%, avg=14336.00, stdev=2896.31, samples=2 00:12:10.599 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:12:10.599 lat (msec) : 10=7.91%, 20=70.48%, 50=16.87%, 100=4.74% 00:12:10.599 cpu : usr=5.87%, sys=7.06%, ctx=350, majf=0, minf=1 00:12:10.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:10.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.599 issued rwts: total=3246,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.599 job2: (groupid=0, jobs=1): err= 0: pid=2472654: Mon Dec 9 10:22:42 2024 00:12:10.599 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:12:10.599 slat (usec): min=2, max=21440, avg=177.86, stdev=934.34 00:12:10.599 clat (usec): min=8346, max=63544, avg=22476.91, stdev=11734.58 00:12:10.599 lat (usec): min=8357, max=63559, avg=22654.77, stdev=11788.99 00:12:10.599 clat percentiles (usec): 00:12:10.599 | 1.00th=[ 8455], 5.00th=[11994], 10.00th=[12518], 20.00th=[13829], 00:12:10.599 | 30.00th=[15139], 40.00th=[15926], 50.00th=[17695], 60.00th=[21365], 00:12:10.599 | 70.00th=[23725], 80.00th=[28967], 90.00th=[41681], 95.00th=[51119], 00:12:10.599 | 99.00th=[60031], 99.50th=[62129], 99.90th=[62129], 99.95th=[63701], 00:12:10.599 | 99.99th=[63701] 00:12:10.599 write: IOPS=2695, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1006msec); 0 zone resets 00:12:10.599 slat (usec): min=3, max=15857, avg=194.34, stdev=1034.36 00:12:10.599 clat (usec): min=1581, max=55660, avg=25639.91, stdev=11606.43 00:12:10.599 lat (usec): min=6508, max=55690, avg=25834.25, stdev=11663.44 00:12:10.599 clat percentiles (usec): 00:12:10.599 | 1.00th=[ 8160], 5.00th=[11469], 10.00th=[13566], 20.00th=[15270], 00:12:10.599 | 30.00th=[18482], 
40.00th=[20841], 50.00th=[22676], 60.00th=[25560], 00:12:10.599 | 70.00th=[30016], 80.00th=[35914], 90.00th=[44303], 95.00th=[50070], 00:12:10.599 | 99.00th=[55313], 99.50th=[55837], 99.90th=[55837], 99.95th=[55837], 00:12:10.599 | 99.99th=[55837] 00:12:10.599 bw ( KiB/s): min= 9600, max=11072, per=15.76%, avg=10336.00, stdev=1040.86, samples=2 00:12:10.599 iops : min= 2400, max= 2768, avg=2584.00, stdev=260.22, samples=2 00:12:10.599 lat (msec) : 2=0.02%, 10=2.16%, 20=44.52%, 50=48.08%, 100=5.22% 00:12:10.599 cpu : usr=2.09%, sys=4.28%, ctx=252, majf=0, minf=1 00:12:10.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:10.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.599 issued rwts: total=2560,2712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.599 job3: (groupid=0, jobs=1): err= 0: pid=2472655: Mon Dec 9 10:22:42 2024 00:12:10.599 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:12:10.599 slat (usec): min=2, max=16414, avg=91.62, stdev=702.99 00:12:10.599 clat (usec): min=1111, max=35781, avg=13342.57, stdev=4362.88 00:12:10.599 lat (usec): min=1145, max=35794, avg=13434.19, stdev=4408.24 00:12:10.599 clat percentiles (usec): 00:12:10.599 | 1.00th=[ 2442], 5.00th=[ 6390], 10.00th=[ 9634], 20.00th=[11731], 00:12:10.599 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:12:10.599 | 70.00th=[14222], 80.00th=[15139], 90.00th=[18482], 95.00th=[21103], 00:12:10.599 | 99.00th=[32113], 99.50th=[33817], 99.90th=[35914], 99.95th=[35914], 00:12:10.599 | 99.99th=[35914] 00:12:10.599 write: IOPS=4823, BW=18.8MiB/s (19.8MB/s)(19.1MiB/1013msec); 0 zone resets 00:12:10.599 slat (usec): min=3, max=14119, avg=95.79, stdev=666.21 00:12:10.599 clat (usec): min=329, max=68271, avg=13635.92, stdev=8955.99 00:12:10.599 lat 
(usec): min=337, max=68284, avg=13731.70, stdev=9021.05 00:12:10.599 clat percentiles (usec): 00:12:10.599 | 1.00th=[ 1549], 5.00th=[ 4015], 10.00th=[ 5997], 20.00th=[ 9372], 00:12:10.599 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11863], 60.00th=[12256], 00:12:10.599 | 70.00th=[12780], 80.00th=[13304], 90.00th=[25560], 95.00th=[32375], 00:12:10.599 | 99.00th=[54264], 99.50th=[55837], 99.90th=[68682], 99.95th=[68682], 00:12:10.599 | 99.99th=[68682] 00:12:10.599 bw ( KiB/s): min=17552, max=20520, per=29.02%, avg=19036.00, stdev=2098.69, samples=2 00:12:10.599 iops : min= 4388, max= 5130, avg=4759.00, stdev=524.67, samples=2 00:12:10.599 lat (usec) : 500=0.05%, 1000=0.06% 00:12:10.599 lat (msec) : 2=1.07%, 4=2.32%, 10=13.88%, 20=72.08%, 50=9.87% 00:12:10.599 lat (msec) : 100=0.66% 00:12:10.599 cpu : usr=5.14%, sys=6.42%, ctx=395, majf=0, minf=1 00:12:10.599 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:10.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:10.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:10.599 issued rwts: total=4608,4886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:10.599 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:10.599 00:12:10.599 Run status group 0 (all jobs): 00:12:10.599 READ: bw=59.9MiB/s (62.8MB/s), 9.94MiB/s-20.0MiB/s (10.4MB/s-20.9MB/s), io=60.7MiB (63.6MB), run=1001-1013msec 00:12:10.599 WRITE: bw=64.0MiB/s (67.2MB/s), 10.5MiB/s-21.2MiB/s (11.0MB/s-22.2MB/s), io=64.9MiB (68.0MB), run=1001-1013msec 00:12:10.599 00:12:10.599 Disk stats (read/write): 00:12:10.599 nvme0n1: ios=4124/4357, merge=0/0, ticks=27555/26840, in_queue=54395, util=97.80% 00:12:10.599 nvme0n2: ios=2594/2567, merge=0/0, ticks=18812/32563, in_queue=51375, util=98.56% 00:12:10.599 nvme0n3: ios=2104/2239, merge=0/0, ticks=16712/16091, in_queue=32803, util=98.60% 00:12:10.599 nvme0n4: ios=3986/4096, merge=0/0, ticks=46519/46060, in_queue=92579, 
util=100.00% 00:12:10.599 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:10.599 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2472791 00:12:10.599 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:10.599 10:22:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:10.599 [global] 00:12:10.599 thread=1 00:12:10.599 invalidate=1 00:12:10.599 rw=read 00:12:10.599 time_based=1 00:12:10.599 runtime=10 00:12:10.599 ioengine=libaio 00:12:10.599 direct=1 00:12:10.599 bs=4096 00:12:10.599 iodepth=1 00:12:10.599 norandommap=1 00:12:10.599 numjobs=1 00:12:10.599 00:12:10.599 [job0] 00:12:10.599 filename=/dev/nvme0n1 00:12:10.599 [job1] 00:12:10.599 filename=/dev/nvme0n2 00:12:10.599 [job2] 00:12:10.599 filename=/dev/nvme0n3 00:12:10.599 [job3] 00:12:10.599 filename=/dev/nvme0n4 00:12:10.599 Could not set queue depth (nvme0n1) 00:12:10.599 Could not set queue depth (nvme0n2) 00:12:10.599 Could not set queue depth (nvme0n3) 00:12:10.599 Could not set queue depth (nvme0n4) 00:12:10.857 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.857 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.857 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.857 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:10.857 fio-3.35 00:12:10.857 Starting 4 threads 00:12:14.148 10:22:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:14.148 fio: io_u error on file /dev/nvme0n4: Operation not supported: read 
offset=36057088, buflen=4096 00:12:14.148 fio: pid=2473002, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.148 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:14.148 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.148 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:14.148 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=356352, buflen=4096 00:12:14.148 fio: pid=2473001, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.475 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.475 10:22:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:14.475 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4599808, buflen=4096 00:12:14.475 fio: pid=2472999, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.752 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:14.752 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:14.752 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44384256, buflen=4096 00:12:14.752 fio: pid=2473000, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:14.752 00:12:14.752 
job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2472999: Mon Dec 9 10:22:47 2024 00:12:14.752 read: IOPS=320, BW=1279KiB/s (1310kB/s)(4492KiB/3512msec) 00:12:14.752 slat (usec): min=5, max=5905, avg=18.77, stdev=229.22 00:12:14.752 clat (usec): min=175, max=42173, avg=3080.30, stdev=10368.07 00:12:14.752 lat (usec): min=180, max=47961, avg=3099.06, stdev=10409.44 00:12:14.752 clat percentiles (usec): 00:12:14.752 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 212], 00:12:14.752 | 30.00th=[ 225], 40.00th=[ 235], 50.00th=[ 249], 60.00th=[ 265], 00:12:14.752 | 70.00th=[ 277], 80.00th=[ 306], 90.00th=[ 482], 95.00th=[41157], 00:12:14.752 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:14.752 | 99.99th=[42206] 00:12:14.752 bw ( KiB/s): min= 96, max= 7264, per=6.37%, avg=1408.00, stdev=2871.62, samples=6 00:12:14.752 iops : min= 24, max= 1816, avg=352.00, stdev=717.90, samples=6 00:12:14.752 lat (usec) : 250=50.98%, 500=39.23%, 750=2.49%, 1000=0.09% 00:12:14.752 lat (msec) : 2=0.18%, 10=0.09%, 50=6.85% 00:12:14.752 cpu : usr=0.06%, sys=0.54%, ctx=1129, majf=0, minf=1 00:12:14.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.752 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.752 issued rwts: total=1124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.752 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2473000: Mon Dec 9 10:22:47 2024 00:12:14.752 read: IOPS=2872, BW=11.2MiB/s (11.8MB/s)(42.3MiB/3773msec) 00:12:14.752 slat (usec): min=5, max=28936, avg=15.71, stdev=293.68 00:12:14.752 clat (usec): min=176, max=42046, avg=326.96, stdev=1803.00 00:12:14.752 lat (usec): min=183, max=70014, avg=341.76, 
stdev=1883.14 00:12:14.752 clat percentiles (usec): 00:12:14.752 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 219], 00:12:14.752 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 251], 00:12:14.752 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 285], 95.00th=[ 310], 00:12:14.752 | 99.00th=[ 383], 99.50th=[ 412], 99.90th=[41157], 99.95th=[41157], 00:12:14.752 | 99.99th=[42206] 00:12:14.752 bw ( KiB/s): min= 93, max=16808, per=55.99%, avg=12376.71, stdev=5814.30, samples=7 00:12:14.752 iops : min= 23, max= 4202, avg=3094.14, stdev=1453.66, samples=7 00:12:14.752 lat (usec) : 250=59.42%, 500=40.27%, 750=0.09%, 1000=0.02% 00:12:14.752 lat (msec) : 50=0.19% 00:12:14.752 cpu : usr=1.99%, sys=5.78%, ctx=10841, majf=0, minf=2 00:12:14.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.752 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.752 issued rwts: total=10837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.752 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2473001: Mon Dec 9 10:22:47 2024 00:12:14.752 read: IOPS=27, BW=107KiB/s (110kB/s)(348KiB/3238msec) 00:12:14.752 slat (nsec): min=11464, max=33104, avg=21240.91, stdev=8235.39 00:12:14.752 clat (usec): min=341, max=42057, avg=36924.73, stdev=12173.48 00:12:14.752 lat (usec): min=362, max=42070, avg=36946.03, stdev=12174.60 00:12:14.752 clat percentiles (usec): 00:12:14.752 | 1.00th=[ 343], 5.00th=[ 429], 10.00th=[ 8979], 20.00th=[41157], 00:12:14.752 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:14.752 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:12:14.752 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:14.752 | 99.99th=[42206] 
00:12:14.752 bw ( KiB/s): min= 96, max= 136, per=0.48%, avg=106.67, stdev=15.73, samples=6 00:12:14.752 iops : min= 24, max= 34, avg=26.67, stdev= 3.93, samples=6 00:12:14.752 lat (usec) : 500=7.95%, 750=1.14% 00:12:14.752 lat (msec) : 10=1.14%, 50=88.64% 00:12:14.752 cpu : usr=0.09%, sys=0.00%, ctx=88, majf=0, minf=2 00:12:14.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.752 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.752 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.752 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.752 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2473002: Mon Dec 9 10:22:47 2024 00:12:14.752 read: IOPS=3018, BW=11.8MiB/s (12.4MB/s)(34.4MiB/2917msec) 00:12:14.752 slat (nsec): min=5880, max=65865, avg=13314.10, stdev=5832.53 00:12:14.752 clat (usec): min=195, max=41129, avg=311.84, stdev=1503.63 00:12:14.752 lat (usec): min=203, max=41140, avg=325.16, stdev=1503.97 00:12:14.752 clat percentiles (usec): 00:12:14.752 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 235], 00:12:14.752 | 30.00th=[ 241], 40.00th=[ 245], 50.00th=[ 247], 60.00th=[ 251], 00:12:14.752 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 310], 00:12:14.752 | 99.00th=[ 478], 99.50th=[ 545], 99.90th=[41157], 99.95th=[41157], 00:12:14.752 | 99.99th=[41157] 00:12:14.752 bw ( KiB/s): min= 1864, max=15552, per=53.10%, avg=11737.60, stdev=5744.21, samples=5 00:12:14.752 iops : min= 466, max= 3888, avg=2934.40, stdev=1436.05, samples=5 00:12:14.752 lat (usec) : 250=55.93%, 500=43.20%, 750=0.73% 00:12:14.752 lat (msec) : 50=0.14% 00:12:14.753 cpu : usr=2.43%, sys=6.34%, ctx=8804, majf=0, minf=2 00:12:14.753 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:14.753 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.753 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:14.753 issued rwts: total=8804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:14.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:14.753 00:12:14.753 Run status group 0 (all jobs): 00:12:14.753 READ: bw=21.6MiB/s (22.6MB/s), 107KiB/s-11.8MiB/s (110kB/s-12.4MB/s), io=81.4MiB (85.4MB), run=2917-3773msec 00:12:14.753 00:12:14.753 Disk stats (read/write): 00:12:14.753 nvme0n1: ios=1159/0, merge=0/0, ticks=3456/0, in_queue=3456, util=98.88% 00:12:14.753 nvme0n2: ios=10832/0, merge=0/0, ticks=3297/0, in_queue=3297, util=95.77% 00:12:14.753 nvme0n3: ios=84/0, merge=0/0, ticks=3092/0, in_queue=3092, util=96.79% 00:12:14.753 nvme0n4: ios=8634/0, merge=0/0, ticks=2615/0, in_queue=2615, util=96.71% 00:12:15.011 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.011 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:15.268 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.269 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:15.525 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.526 10:22:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:15.783 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:15.783 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:16.041 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:16.041 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2472791 00:12:16.041 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:16.041 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.299 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:16.299 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:16.299 nvmf hotplug test: fio failed as expected 00:12:16.299 10:22:48 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.557 rmmod nvme_tcp 00:12:16.557 rmmod nvme_fabrics 00:12:16.557 rmmod nvme_keyring 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2470817 ']' 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@518 -- # killprocess 2470817 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2470817 ']' 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2470817 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2470817 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2470817' 00:12:16.557 killing process with pid 2470817 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2470817 00:12:16.557 10:22:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2470817 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.124 10:22:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.124 10:22:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.031 00:12:19.031 real 0m24.318s 00:12:19.031 user 1m24.652s 00:12:19.031 sys 0m7.696s 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.031 ************************************ 00:12:19.031 END TEST nvmf_fio_target 00:12:19.031 ************************************ 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:19.031 ************************************ 00:12:19.031 START TEST nvmf_bdevio 00:12:19.031 ************************************ 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:19.031 * Looking for test storage... 00:12:19.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.031 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:19.290 
10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.290 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:19.290 --rc genhtml_branch_coverage=1 00:12:19.290 --rc genhtml_function_coverage=1 00:12:19.290 --rc genhtml_legend=1 00:12:19.290 --rc geninfo_all_blocks=1 00:12:19.290 --rc geninfo_unexecuted_blocks=1 00:12:19.290 00:12:19.290 ' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.290 --rc genhtml_branch_coverage=1 00:12:19.290 --rc genhtml_function_coverage=1 00:12:19.290 --rc genhtml_legend=1 00:12:19.290 --rc geninfo_all_blocks=1 00:12:19.290 --rc geninfo_unexecuted_blocks=1 00:12:19.290 00:12:19.290 ' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.290 --rc genhtml_branch_coverage=1 00:12:19.290 --rc genhtml_function_coverage=1 00:12:19.290 --rc genhtml_legend=1 00:12:19.290 --rc geninfo_all_blocks=1 00:12:19.290 --rc geninfo_unexecuted_blocks=1 00:12:19.290 00:12:19.290 ' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.290 --rc genhtml_branch_coverage=1 00:12:19.290 --rc genhtml_function_coverage=1 00:12:19.290 --rc genhtml_legend=1 00:12:19.290 --rc geninfo_all_blocks=1 00:12:19.290 --rc geninfo_unexecuted_blocks=1 00:12:19.290 00:12:19.290 ' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
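The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`/`-`/`:` into arrays (`read -ra ver1`) and compares components numerically, padding the shorter version with zeros. A condensed sketch of that comparison, with the helper name simplified and only numeric dot/dash-separated components handled:

```shell
# Condensed sketch of the component-wise version comparison driving the
# "lt 1.15 2" check above (scripts/common.sh); simplified helper name,
# numeric components only.
ver_lt() {
    local -a v1 v2
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    local i len=${#v1[@]}
    if (( ${#v2[@]} > len )); then len=${#v2[@]}; fi
    for (( i = 0; i < len; i++ )); do
        # missing components compare as 0, so "2" == "2.0"
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then return 0; fi
        if (( a > b )); then return 1; fi
    done
    return 1  # equal versions are not "less than"
}
```

Comparing component-by-component is what keeps `1.15 < 2` true while a naive string comparison would order them the other way.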
00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.290 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.291 10:22:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.842 10:22:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.842 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:21.843 10:22:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:21.843 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:21.843 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:21.843 
10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:21.843 Found net devices under 0000:09:00.0: cvl_0_0 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:21.843 Found net devices under 0000:09:00.1: cvl_0_1 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:21.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:12:21.843 00:12:21.843 --- 10.0.0.2 ping statistics --- 00:12:21.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.843 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.843 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.843 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:21.843 00:12:21.843 --- 10.0.0.1 ping statistics --- 00:12:21.843 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.843 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:21.843 10:22:53 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2475656 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2475656 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2475656 ']' 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.843 10:22:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.843 [2024-12-09 10:22:53.921826] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:12:21.843 [2024-12-09 10:22:53.921913] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.843 [2024-12-09 10:22:53.993550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.844 [2024-12-09 10:22:54.048834] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.844 [2024-12-09 10:22:54.048891] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.844 [2024-12-09 10:22:54.048903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.844 [2024-12-09 10:22:54.048913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.844 [2024-12-09 10:22:54.048922] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.844 [2024-12-09 10:22:54.050625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:21.844 [2024-12-09 10:22:54.050688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:21.844 [2024-12-09 10:22:54.050796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:21.844 [2024-12-09 10:22:54.050798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 [2024-12-09 10:22:54.202412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.844 10:22:54 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 Malloc0 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:21.844 [2024-12-09 10:22:54.272024] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:21.844 { 00:12:21.844 "params": { 00:12:21.844 "name": "Nvme$subsystem", 00:12:21.844 "trtype": "$TEST_TRANSPORT", 00:12:21.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:21.844 "adrfam": "ipv4", 00:12:21.844 "trsvcid": "$NVMF_PORT", 00:12:21.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:21.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:21.844 "hdgst": ${hdgst:-false}, 00:12:21.844 "ddgst": ${ddgst:-false} 00:12:21.844 }, 00:12:21.844 "method": "bdev_nvme_attach_controller" 00:12:21.844 } 00:12:21.844 EOF 00:12:21.844 )") 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:12:21.844 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:12:22.103 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:12:22.103 10:22:54 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:22.103 "params": { 00:12:22.103 "name": "Nvme1", 00:12:22.103 "trtype": "tcp", 00:12:22.103 "traddr": "10.0.0.2", 00:12:22.103 "adrfam": "ipv4", 00:12:22.103 "trsvcid": "4420", 00:12:22.103 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.103 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.103 "hdgst": false, 00:12:22.103 "ddgst": false 00:12:22.103 }, 00:12:22.103 "method": "bdev_nvme_attach_controller" 00:12:22.103 }' 00:12:22.103 [2024-12-09 10:22:54.324311] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:12:22.103 [2024-12-09 10:22:54.324388] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2475681 ] 00:12:22.103 [2024-12-09 10:22:54.397710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:22.103 [2024-12-09 10:22:54.463332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.103 [2024-12-09 10:22:54.463445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.103 [2024-12-09 10:22:54.463451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.361 I/O targets: 00:12:22.361 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:22.361 00:12:22.361 00:12:22.361 CUnit - A unit testing framework for C - Version 2.1-3 00:12:22.361 http://cunit.sourceforge.net/ 00:12:22.361 00:12:22.361 00:12:22.361 Suite: bdevio tests on: Nvme1n1 00:12:22.620 Test: blockdev write read block ...passed 00:12:22.620 Test: blockdev write zeroes read block ...passed 00:12:22.620 Test: blockdev write zeroes read no split ...passed 00:12:22.620 Test: blockdev write zeroes read split 
...passed 00:12:22.620 Test: blockdev write zeroes read split partial ...passed 00:12:22.620 Test: blockdev reset ...[2024-12-09 10:22:54.933029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:22.620 [2024-12-09 10:22:54.933137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7178c0 (9): Bad file descriptor 00:12:22.620 [2024-12-09 10:22:54.984151] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:22.620 passed 00:12:22.620 Test: blockdev write read 8 blocks ...passed 00:12:22.620 Test: blockdev write read size > 128k ...passed 00:12:22.620 Test: blockdev write read invalid size ...passed 00:12:22.620 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:22.620 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:22.620 Test: blockdev write read max offset ...passed 00:12:22.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:22.884 Test: blockdev writev readv 8 blocks ...passed 00:12:22.884 Test: blockdev writev readv 30 x 1block ...passed 00:12:22.884 Test: blockdev writev readv block ...passed 00:12:22.884 Test: blockdev writev readv size > 128k ...passed 00:12:22.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:22.884 Test: blockdev comparev and writev ...[2024-12-09 10:22:55.195113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.195156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.195183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 
10:22:55.195200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.195511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.195536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.195557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.195573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.195868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.195891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.195912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.195928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.196250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.196274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.196295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:22.884 [2024-12-09 10:22:55.196318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:22.884 passed 00:12:22.884 Test: blockdev nvme passthru rw ...passed 00:12:22.884 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:22:55.278364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:22.884 [2024-12-09 10:22:55.278391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.278529] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:22.884 [2024-12-09 10:22:55.278551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.278685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:22.884 [2024-12-09 10:22:55.278708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:22.884 [2024-12-09 10:22:55.278834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:22.884 [2024-12-09 10:22:55.278856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:22.884 passed 00:12:22.884 Test: blockdev nvme admin passthru ...passed 00:12:23.142 Test: blockdev copy ...passed 00:12:23.142 00:12:23.142 Run Summary: Type Total Ran Passed Failed Inactive 00:12:23.142 suites 1 1 n/a 0 0 00:12:23.142 tests 23 23 23 0 0 00:12:23.142 asserts 152 152 152 0 n/a 00:12:23.142 00:12:23.142 Elapsed time = 1.119 seconds 
00:12:23.142 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.142 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.142 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:23.143 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:23.400 rmmod nvme_tcp 00:12:23.400 rmmod nvme_fabrics 00:12:23.400 rmmod nvme_keyring 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2475656 ']' 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2475656 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2475656 ']' 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2475656 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2475656 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2475656' 00:12:23.400 killing process with pid 2475656 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2475656 00:12:23.400 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2475656 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.658 10:22:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.201 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.201 00:12:26.201 real 0m6.683s 00:12:26.201 user 0m10.902s 00:12:26.201 sys 0m2.215s 00:12:26.201 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.201 10:22:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:26.201 ************************************ 00:12:26.201 END TEST nvmf_bdevio 00:12:26.201 ************************************ 00:12:26.201 10:22:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:26.201 00:12:26.201 real 3m57.105s 00:12:26.201 user 10m16.069s 00:12:26.201 sys 1m8.581s 00:12:26.201 10:22:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.201 10:22:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:26.201 ************************************ 00:12:26.201 END TEST nvmf_target_core 00:12:26.201 ************************************ 00:12:26.201 10:22:58 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:26.201 10:22:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.202 10:22:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.202 10:22:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set 
+x 00:12:26.202 ************************************ 00:12:26.202 START TEST nvmf_target_extra 00:12:26.202 ************************************ 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:26.202 * Looking for test storage... 00:12:26.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:26.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.202 --rc genhtml_branch_coverage=1 00:12:26.202 --rc genhtml_function_coverage=1 00:12:26.202 --rc genhtml_legend=1 00:12:26.202 --rc geninfo_all_blocks=1 
00:12:26.202 --rc geninfo_unexecuted_blocks=1 00:12:26.202 00:12:26.202 ' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:26.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.202 --rc genhtml_branch_coverage=1 00:12:26.202 --rc genhtml_function_coverage=1 00:12:26.202 --rc genhtml_legend=1 00:12:26.202 --rc geninfo_all_blocks=1 00:12:26.202 --rc geninfo_unexecuted_blocks=1 00:12:26.202 00:12:26.202 ' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:26.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.202 --rc genhtml_branch_coverage=1 00:12:26.202 --rc genhtml_function_coverage=1 00:12:26.202 --rc genhtml_legend=1 00:12:26.202 --rc geninfo_all_blocks=1 00:12:26.202 --rc geninfo_unexecuted_blocks=1 00:12:26.202 00:12:26.202 ' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:26.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.202 --rc genhtml_branch_coverage=1 00:12:26.202 --rc genhtml_function_coverage=1 00:12:26.202 --rc genhtml_legend=1 00:12:26.202 --rc geninfo_all_blocks=1 00:12:26.202 --rc geninfo_unexecuted_blocks=1 00:12:26.202 00:12:26.202 ' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.202 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:26.202 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.203 ************************************ 00:12:26.203 START TEST nvmf_example 00:12:26.203 ************************************ 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:26.203 * Looking for test storage... 00:12:26.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.203 
10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:26.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.203 --rc genhtml_branch_coverage=1 00:12:26.203 --rc genhtml_function_coverage=1 00:12:26.203 --rc genhtml_legend=1 00:12:26.203 --rc geninfo_all_blocks=1 00:12:26.203 --rc geninfo_unexecuted_blocks=1 00:12:26.203 00:12:26.203 ' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:26.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.203 --rc genhtml_branch_coverage=1 00:12:26.203 --rc genhtml_function_coverage=1 00:12:26.203 --rc genhtml_legend=1 00:12:26.203 --rc geninfo_all_blocks=1 00:12:26.203 --rc geninfo_unexecuted_blocks=1 00:12:26.203 00:12:26.203 ' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:26.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.203 --rc genhtml_branch_coverage=1 00:12:26.203 --rc genhtml_function_coverage=1 00:12:26.203 --rc genhtml_legend=1 00:12:26.203 --rc geninfo_all_blocks=1 00:12:26.203 --rc geninfo_unexecuted_blocks=1 00:12:26.203 00:12:26.203 ' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:26.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.203 --rc 
genhtml_branch_coverage=1 00:12:26.203 --rc genhtml_function_coverage=1 00:12:26.203 --rc genhtml_legend=1 00:12:26.203 --rc geninfo_all_blocks=1 00:12:26.203 --rc geninfo_unexecuted_blocks=1 00:12:26.203 00:12:26.203 ' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:26.203 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:26.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:26.204 10:22:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.204 
10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:26.204 10:22:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:28.734 10:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:28.734 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:28.734 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:28.734 Found net devices under 0000:09:00.0: cvl_0_0 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:28.734 10:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:28.734 Found net devices under 0000:09:00.1: cvl_0_1 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.734 
10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:28.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:12:28.734 00:12:28.734 --- 10.0.0.2 ping statistics --- 00:12:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.734 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:12:28.734 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:12:28.734 00:12:28.735 --- 10.0.0.1 ping statistics --- 00:12:28.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.735 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:28.735 10:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2477940 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2477940 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2477940 ']' 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.735 10:23:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.668 10:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:29.668 10:23:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:41.860 Initializing NVMe Controllers 00:12:41.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:41.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:41.860 Initialization complete. Launching workers. 00:12:41.860 ======================================================== 00:12:41.860 Latency(us) 00:12:41.860 Device Information : IOPS MiB/s Average min max 00:12:41.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14524.97 56.74 4406.03 901.25 15310.73 00:12:41.860 ======================================================== 00:12:41.860 Total : 14524.97 56.74 4406.03 901.25 15310.73 00:12:41.860 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:41.860 rmmod nvme_tcp 00:12:41.860 rmmod nvme_fabrics 00:12:41.860 rmmod nvme_keyring 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v 
-r nvme-fabrics 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2477940 ']' 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2477940 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2477940 ']' 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2477940 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2477940 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2477940' 00:12:41.860 killing process with pid 2477940 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2477940 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2477940 00:12:41.860 nvmf threads initialize successfully 00:12:41.860 bdev subsystem init successfully 00:12:41.860 created a nvmf target service 00:12:41.860 create targets's poll groups done 00:12:41.860 all subsystems of target started 00:12:41.860 nvmf target is running 00:12:41.860 all subsystems of target stopped 00:12:41.860 
destroy targets's poll groups done 00:12:41.860 destroyed the nvmf target service 00:12:41.860 bdev subsystem finish successfully 00:12:41.860 nvmf threads destroy successfully 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:12:41.860 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.861 10:23:12 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 
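The `iptr` step traced above pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping only the firewall rules the test tagged as SPDK-owned. A minimal dry-run sketch of that filter follows; the rule text is illustrative, not captured from this run, and the real flow operates on live firewall state rather than a string:

```shell
# Dry run of the iptables-save | grep -v SPDK_NVMF | iptables-restore
# scrub traced above. The rules below are made-up examples.
rules='-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

# Keep every rule that is not tagged SPDK_NVMF; in the real teardown the
# surviving rules are fed back through iptables-restore.
scrubbed=$(grep -v SPDK_NVMF <<< "$rules")
printf '%s\n' "$scrubbed"
```

Filtering on the comment tag means unrelated host rules (like the SSH rule above) survive the cleanup untouched.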
00:12:42.427 00:12:42.427 real 0m16.354s 00:12:42.427 user 0m45.182s 00:12:42.427 sys 0m3.866s 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:42.427 ************************************ 00:12:42.427 END TEST nvmf_example 00:12:42.427 ************************************ 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.427 ************************************ 00:12:42.427 START TEST nvmf_filesystem 00:12:42.427 ************************************ 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:42.427 * Looking for test storage... 
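The `END TEST nvmf_example` / `START TEST nvmf_filesystem` banners above come from the `run_test` helper. The following is a hypothetical condensation of that banner pattern, not the actual autotest_common.sh implementation (which also manages xtrace state and timing):

```shell
# Hypothetical distillation of the run_test banner pattern visible in the
# trace: frame a test command with START/END banners and preserve its
# exit status.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    "$@"
    local rc=$?   # capture the test command's status before the banners
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

out=$(run_test demo echo hello)
```

Returning the wrapped command's status lets the caller chain `run_test` invocations while still failing the build on the first broken test.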
00:12:42.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:42.427 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:42.689 
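The `cmp_versions 1.15 '<' 2` walk-through traced here splits each version string on `.`, `-` and `:` and compares the numeric fields in order. A self-contained sketch of that algorithm, reduced to the less-than case (the real scripts/common.sh helper also handles `>`, `==`, and the other operators):

```shell
# Sketch of the component-wise version compare walked through in the
# trace: split both versions on '.', '-' or ':' and compare numerically
# field by field. Returns 0 (true) when $1 < $2.
version_lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        # Missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0).
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0   # strictly less at the first differing field
        (( a > b )) && return 1
    done
    return 1                      # equal versions are not less-than
}
```

This is why the trace reports `lt 1.15 2` as true: the first fields already differ (1 < 2), so the remaining fields are never consulted.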
10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:42.689 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:42.689 --rc genhtml_branch_coverage=1 00:12:42.689 --rc genhtml_function_coverage=1 00:12:42.689 --rc genhtml_legend=1 00:12:42.689 --rc geninfo_all_blocks=1 00:12:42.689 --rc geninfo_unexecuted_blocks=1 00:12:42.689 00:12:42.689 ' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:42.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.689 --rc genhtml_branch_coverage=1 00:12:42.689 --rc genhtml_function_coverage=1 00:12:42.689 --rc genhtml_legend=1 00:12:42.689 --rc geninfo_all_blocks=1 00:12:42.689 --rc geninfo_unexecuted_blocks=1 00:12:42.689 00:12:42.689 ' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:42.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.689 --rc genhtml_branch_coverage=1 00:12:42.689 --rc genhtml_function_coverage=1 00:12:42.689 --rc genhtml_legend=1 00:12:42.689 --rc geninfo_all_blocks=1 00:12:42.689 --rc geninfo_unexecuted_blocks=1 00:12:42.689 00:12:42.689 ' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:42.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.689 --rc genhtml_branch_coverage=1 00:12:42.689 --rc genhtml_function_coverage=1 00:12:42.689 --rc genhtml_legend=1 00:12:42.689 --rc geninfo_all_blocks=1 00:12:42.689 --rc geninfo_unexecuted_blocks=1 00:12:42.689 00:12:42.689 ' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:42.689 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:42.689 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:42.689 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:42.690 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:42.690 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:42.690 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:42.690 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:42.691 
10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:42.691 #define SPDK_CONFIG_H 00:12:42.691 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:42.691 #define SPDK_CONFIG_APPS 1 00:12:42.691 #define SPDK_CONFIG_ARCH native 00:12:42.691 #undef SPDK_CONFIG_ASAN 00:12:42.691 #undef SPDK_CONFIG_AVAHI 00:12:42.691 #undef SPDK_CONFIG_CET 00:12:42.691 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:42.691 #define SPDK_CONFIG_COVERAGE 1 00:12:42.691 #define SPDK_CONFIG_CROSS_PREFIX 00:12:42.691 #undef SPDK_CONFIG_CRYPTO 00:12:42.691 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:42.691 #undef SPDK_CONFIG_CUSTOMOCF 00:12:42.691 #undef SPDK_CONFIG_DAOS 00:12:42.691 #define SPDK_CONFIG_DAOS_DIR 00:12:42.691 #define SPDK_CONFIG_DEBUG 1 00:12:42.691 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:42.691 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:42.691 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:42.691 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:42.691 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:42.691 #undef SPDK_CONFIG_DPDK_UADK 00:12:42.691 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:42.691 #define SPDK_CONFIG_EXAMPLES 1 00:12:42.691 #undef SPDK_CONFIG_FC 00:12:42.691 #define SPDK_CONFIG_FC_PATH 00:12:42.691 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:42.691 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:42.691 #define SPDK_CONFIG_FSDEV 1 00:12:42.691 #undef SPDK_CONFIG_FUSE 00:12:42.691 #undef SPDK_CONFIG_FUZZER 00:12:42.691 #define SPDK_CONFIG_FUZZER_LIB 00:12:42.691 #undef SPDK_CONFIG_GOLANG 00:12:42.691 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:42.691 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:42.691 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:42.691 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:42.691 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:42.691 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:42.691 #undef SPDK_CONFIG_HAVE_LZ4 00:12:42.691 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:42.691 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:42.691 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:42.691 #define SPDK_CONFIG_IDXD 1 00:12:42.691 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:42.691 #undef SPDK_CONFIG_IPSEC_MB 00:12:42.691 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:42.691 #define SPDK_CONFIG_ISAL 1 00:12:42.691 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:42.691 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:42.691 #define SPDK_CONFIG_LIBDIR 00:12:42.691 #undef SPDK_CONFIG_LTO 00:12:42.691 #define SPDK_CONFIG_MAX_LCORES 128 00:12:42.691 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:42.691 #define SPDK_CONFIG_NVME_CUSE 1 00:12:42.691 #undef SPDK_CONFIG_OCF 00:12:42.691 #define SPDK_CONFIG_OCF_PATH 00:12:42.691 #define SPDK_CONFIG_OPENSSL_PATH 00:12:42.691 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:42.691 #define SPDK_CONFIG_PGO_DIR 00:12:42.691 #undef SPDK_CONFIG_PGO_USE 00:12:42.691 #define SPDK_CONFIG_PREFIX /usr/local 00:12:42.691 #undef SPDK_CONFIG_RAID5F 00:12:42.691 #undef SPDK_CONFIG_RBD 00:12:42.691 #define SPDK_CONFIG_RDMA 1 00:12:42.691 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:42.691 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:42.691 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:42.691 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:42.691 #define SPDK_CONFIG_SHARED 1 00:12:42.691 #undef SPDK_CONFIG_SMA 00:12:42.691 #define SPDK_CONFIG_TESTS 1 00:12:42.691 #undef SPDK_CONFIG_TSAN 00:12:42.691 #define SPDK_CONFIG_UBLK 1 00:12:42.691 #define SPDK_CONFIG_UBSAN 1 00:12:42.691 #undef SPDK_CONFIG_UNIT_TESTS 00:12:42.691 #undef SPDK_CONFIG_URING 00:12:42.691 #define SPDK_CONFIG_URING_PATH 00:12:42.691 #undef SPDK_CONFIG_URING_ZNS 00:12:42.691 #undef SPDK_CONFIG_USDT 00:12:42.691 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:42.691 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:42.691 #define SPDK_CONFIG_VFIO_USER 1 00:12:42.691 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:42.691 #define SPDK_CONFIG_VHOST 1 00:12:42.691 #define SPDK_CONFIG_VIRTIO 1 00:12:42.691 #undef SPDK_CONFIG_VTUNE 00:12:42.691 #define SPDK_CONFIG_VTUNE_DIR 00:12:42.691 #define SPDK_CONFIG_WERROR 1 00:12:42.691 #define SPDK_CONFIG_WPDK_DIR 00:12:42.691 #undef SPDK_CONFIG_XNVME 00:12:42.691 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.691 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
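The `applications.sh@23` test above decides whether this is a debug build by glob-matching the generated `include/spdk/config.h` against `#define SPDK_CONFIG_DEBUG`. The same check is sketched below against an inline, abbreviated sample header instead of the real file:

```shell
# Same substring test as applications.sh@23 in the trace, using bash
# [[ ... == *pattern* ]] matching instead of spawning grep. The header
# contents here are an abbreviated sample, not the real config.h.
config_h='#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#undef SPDK_CONFIG_ASAN
#endif /* SPDK_CONFIG_H */'

if [[ $config_h == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug_build=1
else
    debug_build=0
fi
```

Matching on the literal `#define` line distinguishes enabled options from disabled ones, since the generator emits `#undef SPDK_CONFIG_...` for options that are off.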
00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:42.692 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:42.692 
10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:42.692 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:42.693 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:42.693 
10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:42.693 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.693 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:42.694 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2479649 ]] 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2479649 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.fcxRDY 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.fcxRDY/tests/target /tmp/spdk.fcxRDY 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=51625922560 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988503552 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10362580992 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30982885376 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994251776 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375261184 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397703168 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22441984 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=29919997952 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994251776 00:12:42.695 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=1074253824 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:42.695 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:42.696 * Looking for test storage... 
00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=51625922560 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=12577173504 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.696 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:42.696 10:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:42.696 10:23:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.696 --rc genhtml_branch_coverage=1 00:12:42.696 --rc genhtml_function_coverage=1 00:12:42.696 --rc genhtml_legend=1 00:12:42.696 --rc geninfo_all_blocks=1 00:12:42.696 --rc geninfo_unexecuted_blocks=1 00:12:42.696 00:12:42.696 ' 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.696 --rc genhtml_branch_coverage=1 00:12:42.696 --rc genhtml_function_coverage=1 00:12:42.696 --rc genhtml_legend=1 00:12:42.696 --rc geninfo_all_blocks=1 00:12:42.696 --rc geninfo_unexecuted_blocks=1 00:12:42.696 00:12:42.696 ' 00:12:42.696 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:42.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.696 --rc genhtml_branch_coverage=1 00:12:42.696 --rc genhtml_function_coverage=1 00:12:42.696 --rc genhtml_legend=1 00:12:42.696 --rc geninfo_all_blocks=1 00:12:42.696 --rc geninfo_unexecuted_blocks=1 00:12:42.696 00:12:42.696 ' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:42.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.697 --rc genhtml_branch_coverage=1 00:12:42.697 --rc genhtml_function_coverage=1 00:12:42.697 --rc genhtml_legend=1 00:12:42.697 --rc geninfo_all_blocks=1 00:12:42.697 --rc geninfo_unexecuted_blocks=1 00:12:42.697 00:12:42.697 ' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.697 10:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:42.697 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:42.697 10:23:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.239 10:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:45.239 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:45.239 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.239 10:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:45.239 Found net devices under 0000:09:00.0: cvl_0_0 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:45.239 Found net devices under 0000:09:00.1: cvl_0_1 00:12:45.239 10:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.239 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:45.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:45.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:12:45.240 00:12:45.240 --- 10.0.0.2 ping statistics --- 00:12:45.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.240 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:12:45.240 00:12:45.240 --- 10.0.0.1 ping statistics --- 00:12:45.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.240 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:45.240 10:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:45.240 ************************************ 00:12:45.240 START TEST nvmf_filesystem_no_in_capsule 00:12:45.240 ************************************ 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2481411 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2481411 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2481411 ']' 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:45.240 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.240 [2024-12-09 10:23:17.568778] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:12:45.240 [2024-12-09 10:23:17.568865] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.240 [2024-12-09 10:23:17.637862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.498 [2024-12-09 10:23:17.695335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.498 [2024-12-09 10:23:17.695399] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:45.498 [2024-12-09 10:23:17.695412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.498 [2024-12-09 10:23:17.695423] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.498 [2024-12-09 10:23:17.695432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.498 [2024-12-09 10:23:17.696898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.498 [2024-12-09 10:23:17.697005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.498 [2024-12-09 10:23:17.697080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.498 [2024-12-09 10:23:17.697083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.498 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.499 [2024-12-09 10:23:17.851363] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.499 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.499 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:45.499 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.499 10:23:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 Malloc1 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 [2024-12-09 10:23:18.051968] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:45.757 10:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:45.757 { 00:12:45.757 "name": "Malloc1", 00:12:45.757 "aliases": [ 00:12:45.757 "223d6e9d-177d-4f7e-82ec-67a6a9a8752a" 00:12:45.757 ], 00:12:45.757 "product_name": "Malloc disk", 00:12:45.757 "block_size": 512, 00:12:45.757 "num_blocks": 1048576, 00:12:45.757 "uuid": "223d6e9d-177d-4f7e-82ec-67a6a9a8752a", 00:12:45.757 "assigned_rate_limits": { 00:12:45.757 "rw_ios_per_sec": 0, 00:12:45.757 "rw_mbytes_per_sec": 0, 00:12:45.757 "r_mbytes_per_sec": 0, 00:12:45.757 "w_mbytes_per_sec": 0 00:12:45.757 }, 00:12:45.757 "claimed": true, 00:12:45.757 "claim_type": "exclusive_write", 00:12:45.757 "zoned": false, 00:12:45.757 "supported_io_types": { 00:12:45.757 "read": true, 00:12:45.757 "write": true, 00:12:45.757 "unmap": true, 00:12:45.757 "flush": true, 00:12:45.757 "reset": true, 00:12:45.757 "nvme_admin": false, 00:12:45.757 "nvme_io": false, 00:12:45.757 "nvme_io_md": false, 00:12:45.757 "write_zeroes": true, 00:12:45.757 "zcopy": true, 00:12:45.757 "get_zone_info": false, 00:12:45.757 "zone_management": false, 00:12:45.757 "zone_append": false, 00:12:45.757 "compare": false, 00:12:45.757 "compare_and_write": 
false, 00:12:45.757 "abort": true, 00:12:45.757 "seek_hole": false, 00:12:45.757 "seek_data": false, 00:12:45.757 "copy": true, 00:12:45.757 "nvme_iov_md": false 00:12:45.757 }, 00:12:45.757 "memory_domains": [ 00:12:45.757 { 00:12:45.757 "dma_device_id": "system", 00:12:45.757 "dma_device_type": 1 00:12:45.757 }, 00:12:45.757 { 00:12:45.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:45.757 "dma_device_type": 2 00:12:45.757 } 00:12:45.757 ], 00:12:45.757 "driver_specific": {} 00:12:45.757 } 00:12:45.757 ]' 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:45.757 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:45.758 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:45.758 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.691 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:46.691 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:46.691 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.691 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:46.691 10:23:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:48.587 10:23:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:48.587 10:23:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:48.845 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:49.410 10:23:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:50.780 10:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:50.780 ************************************ 00:12:50.780 START TEST filesystem_ext4 00:12:50.780 ************************************ 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:50.780 10:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:50.780 10:23:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:50.780 mke2fs 1.47.0 (5-Feb-2023) 00:12:50.780 Discarding device blocks: 0/522240 done 00:12:50.781 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:50.781 Filesystem UUID: 0b9fc575-a991-4a4d-b6a0-49064d20c4ea 00:12:50.781 Superblock backups stored on blocks: 00:12:50.781 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:50.781 00:12:50.781 Allocating group tables: 0/64 done 00:12:50.781 Writing inode tables: 0/64 done 00:12:50.781 Creating journal (8192 blocks): done 00:12:50.781 Writing superblocks and filesystem accounting information: 0/64 done 00:12:50.781 00:12:50.781 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:50.781 10:23:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.381 10:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2481411 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.381 00:12:57.381 real 0m6.278s 00:12:57.381 user 0m0.017s 00:12:57.381 sys 0m0.071s 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:57.381 ************************************ 00:12:57.381 END TEST filesystem_ext4 00:12:57.381 ************************************ 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:57.381 
10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.381 ************************************ 00:12:57.381 START TEST filesystem_btrfs 00:12:57.381 ************************************ 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:57.381 10:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:57.381 btrfs-progs v6.8.1 00:12:57.381 See https://btrfs.readthedocs.io for more information. 00:12:57.381 00:12:57.381 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:57.381 NOTE: several default settings have changed in version 5.15, please make sure 00:12:57.381 this does not affect your deployments: 00:12:57.381 - DUP for metadata (-m dup) 00:12:57.381 - enabled no-holes (-O no-holes) 00:12:57.381 - enabled free-space-tree (-R free-space-tree) 00:12:57.381 00:12:57.381 Label: (null) 00:12:57.381 UUID: 746f53b3-7360-40d1-9aac-3de7cc5e510f 00:12:57.381 Node size: 16384 00:12:57.381 Sector size: 4096 (CPU page size: 4096) 00:12:57.381 Filesystem size: 510.00MiB 00:12:57.381 Block group profiles: 00:12:57.381 Data: single 8.00MiB 00:12:57.381 Metadata: DUP 32.00MiB 00:12:57.381 System: DUP 8.00MiB 00:12:57.381 SSD detected: yes 00:12:57.381 Zoned device: no 00:12:57.381 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:57.381 Checksum: crc32c 00:12:57.381 Number of devices: 1 00:12:57.381 Devices: 00:12:57.381 ID SIZE PATH 00:12:57.381 1 510.00MiB /dev/nvme0n1p1 00:12:57.381 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:57.381 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:57.640 10:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2481411 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:57.640 00:12:57.640 real 0m0.739s 00:12:57.640 user 0m0.018s 00:12:57.640 sys 0m0.097s 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.640 
10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:57.640 ************************************ 00:12:57.640 END TEST filesystem_btrfs 00:12:57.640 ************************************ 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:57.640 ************************************ 00:12:57.640 START TEST filesystem_xfs 00:12:57.640 ************************************ 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:57.640 10:23:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:57.640 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:57.640 = sectsz=512 attr=2, projid32bit=1 00:12:57.640 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:57.640 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:57.640 data = bsize=4096 blocks=130560, imaxpct=25 00:12:57.640 = sunit=0 swidth=0 blks 00:12:57.640 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:57.640 log =internal log bsize=4096 blocks=16384, version=2 00:12:57.640 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:57.640 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:58.574 Discarding blocks...Done. 
00:12:58.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:58.574 10:23:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2481411 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:00.504 10:23:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:00.504 00:13:00.504 real 0m2.652s 00:13:00.504 user 0m0.016s 00:13:00.504 sys 0m0.056s 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:00.504 ************************************ 00:13:00.504 END TEST filesystem_xfs 00:13:00.504 ************************************ 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2481411 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2481411 ']' 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2481411 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2481411 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2481411' 00:13:00.504 killing process with pid 2481411 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2481411 00:13:00.504 10:23:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2481411 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:01.069 00:13:01.069 real 0m15.785s 00:13:01.069 user 1m1.001s 00:13:01.069 sys 0m1.965s 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.069 ************************************ 00:13:01.069 END TEST nvmf_filesystem_no_in_capsule 00:13:01.069 ************************************ 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.069 10:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:01.069 ************************************ 00:13:01.069 START TEST nvmf_filesystem_in_capsule 00:13:01.069 ************************************ 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2483516 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2483516 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2483516 ']' 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.069 10:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.069 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.069 [2024-12-09 10:23:33.406112] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:13:01.069 [2024-12-09 10:23:33.406206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.069 [2024-12-09 10:23:33.477697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.328 [2024-12-09 10:23:33.533095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.328 [2024-12-09 10:23:33.533153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.328 [2024-12-09 10:23:33.533182] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.328 [2024-12-09 10:23:33.533193] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.328 [2024-12-09 10:23:33.533203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:01.328 [2024-12-09 10:23:33.534625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.328 [2024-12-09 10:23:33.534703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.328 [2024-12-09 10:23:33.534815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.328 [2024-12-09 10:23:33.534824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.328 [2024-12-09 10:23:33.683063] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.328 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 Malloc1 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 10:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 [2024-12-09 10:23:33.877749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.586 10:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:01.586 { 00:13:01.586 "name": "Malloc1", 00:13:01.586 "aliases": [ 00:13:01.586 "18e4bb38-f470-4d0b-9ee0-4e4fa01e8cf9" 00:13:01.586 ], 00:13:01.586 "product_name": "Malloc disk", 00:13:01.586 "block_size": 512, 00:13:01.586 "num_blocks": 1048576, 00:13:01.586 "uuid": "18e4bb38-f470-4d0b-9ee0-4e4fa01e8cf9", 00:13:01.586 "assigned_rate_limits": { 00:13:01.586 "rw_ios_per_sec": 0, 00:13:01.586 "rw_mbytes_per_sec": 0, 00:13:01.586 "r_mbytes_per_sec": 0, 00:13:01.586 "w_mbytes_per_sec": 0 00:13:01.586 }, 00:13:01.586 "claimed": true, 00:13:01.586 "claim_type": "exclusive_write", 00:13:01.586 "zoned": false, 00:13:01.586 "supported_io_types": { 00:13:01.586 "read": true, 00:13:01.586 "write": true, 00:13:01.586 "unmap": true, 00:13:01.586 "flush": true, 00:13:01.586 "reset": true, 00:13:01.586 "nvme_admin": false, 00:13:01.586 "nvme_io": false, 00:13:01.586 "nvme_io_md": false, 00:13:01.586 "write_zeroes": true, 00:13:01.586 "zcopy": true, 00:13:01.586 "get_zone_info": false, 00:13:01.586 "zone_management": false, 00:13:01.586 "zone_append": false, 00:13:01.586 "compare": false, 00:13:01.586 "compare_and_write": false, 00:13:01.586 "abort": true, 00:13:01.586 "seek_hole": false, 00:13:01.586 "seek_data": false, 00:13:01.586 "copy": true, 00:13:01.586 "nvme_iov_md": false 00:13:01.586 }, 00:13:01.586 "memory_domains": [ 00:13:01.586 { 00:13:01.586 "dma_device_id": "system", 00:13:01.586 "dma_device_type": 1 00:13:01.586 }, 00:13:01.586 { 00:13:01.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.586 "dma_device_type": 2 00:13:01.586 } 00:13:01.586 ], 00:13:01.586 
"driver_specific": {} 00:13:01.586 } 00:13:01.586 ]' 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:01.586 10:23:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:02.519 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:02.519 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:02.519 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.519 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:02.519 10:23:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:04.424 10:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:04.424 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:04.681 10:23:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:05.247 10:23:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:06.180 ************************************ 00:13:06.180 START TEST filesystem_in_capsule_ext4 00:13:06.180 ************************************ 00:13:06.180 10:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:06.180 10:23:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:06.180 mke2fs 1.47.0 (5-Feb-2023) 00:13:06.438 Discarding device blocks: 
0/522240 done 00:13:06.438 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:06.438 Filesystem UUID: ded30364-6842-4f9d-b7a8-c80d7e62d4c1 00:13:06.438 Superblock backups stored on blocks: 00:13:06.438 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:06.438 00:13:06.438 Allocating group tables: 0/64 done 00:13:06.438 Writing inode tables: 0/64 done 00:13:06.438 Creating journal (8192 blocks): done 00:13:08.736 Writing superblocks and filesystem accounting information: 0/64 done 00:13:08.736 00:13:08.736 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:08.736 10:23:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2483516 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:13.991 00:13:13.991 real 0m7.806s 00:13:13.991 user 0m0.016s 00:13:13.991 sys 0m0.074s 00:13:13.991 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 ************************************ 00:13:13.992 END TEST filesystem_in_capsule_ext4 00:13:13.992 ************************************ 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:13.992 ************************************ 00:13:13.992 START 
TEST filesystem_in_capsule_btrfs 00:13:13.992 ************************************ 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:13.992 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:14.249 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:14.249 btrfs-progs v6.8.1 00:13:14.249 See https://btrfs.readthedocs.io for more information. 00:13:14.249 00:13:14.249 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:14.249 NOTE: several default settings have changed in version 5.15, please make sure 00:13:14.249 this does not affect your deployments: 00:13:14.249 - DUP for metadata (-m dup) 00:13:14.249 - enabled no-holes (-O no-holes) 00:13:14.249 - enabled free-space-tree (-R free-space-tree) 00:13:14.249 00:13:14.249 Label: (null) 00:13:14.249 UUID: e67b3293-c9d9-40c3-9365-61d5ca51fc9c 00:13:14.249 Node size: 16384 00:13:14.249 Sector size: 4096 (CPU page size: 4096) 00:13:14.249 Filesystem size: 510.00MiB 00:13:14.249 Block group profiles: 00:13:14.249 Data: single 8.00MiB 00:13:14.249 Metadata: DUP 32.00MiB 00:13:14.249 System: DUP 8.00MiB 00:13:14.249 SSD detected: yes 00:13:14.249 Zoned device: no 00:13:14.249 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:14.249 Checksum: crc32c 00:13:14.249 Number of devices: 1 00:13:14.249 Devices: 00:13:14.249 ID SIZE PATH 00:13:14.249 1 510.00MiB /dev/nvme0n1p1 00:13:14.249 00:13:14.249 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:14.249 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2483516 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:14.507 00:13:14.507 real 0m0.493s 00:13:14.507 user 0m0.016s 00:13:14.507 sys 0m0.097s 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:14.507 ************************************ 00:13:14.507 END TEST filesystem_in_capsule_btrfs 00:13:14.507 ************************************ 00:13:14.507 10:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.507 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:14.765 ************************************ 00:13:14.765 START TEST filesystem_in_capsule_xfs 00:13:14.765 ************************************ 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:14.765 
10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:14.765 10:23:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:14.765 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:14.765 = sectsz=512 attr=2, projid32bit=1 00:13:14.765 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:14.765 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:14.765 data = bsize=4096 blocks=130560, imaxpct=25 00:13:14.765 = sunit=0 swidth=0 blks 00:13:14.765 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:14.765 log =internal log bsize=4096 blocks=16384, version=2 00:13:14.765 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:14.765 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:15.698 Discarding blocks...Done. 
00:13:15.698 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:15.698 10:23:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2483516 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:17.593 00:13:17.593 real 0m2.656s 00:13:17.593 user 0m0.020s 00:13:17.593 sys 0m0.048s 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:17.593 ************************************ 00:13:17.593 END TEST filesystem_in_capsule_xfs 00:13:17.593 ************************************ 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:17.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.593 10:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2483516 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2483516 ']' 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2483516 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.593 10:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2483516 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2483516' 00:13:17.593 killing process with pid 2483516 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2483516 00:13:17.593 10:23:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2483516 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:18.159 00:13:18.159 real 0m17.031s 00:13:18.159 user 1m5.852s 00:13:18.159 sys 0m2.066s 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.159 ************************************ 00:13:18.159 END TEST nvmf_filesystem_in_capsule 00:13:18.159 ************************************ 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:18.159 rmmod nvme_tcp 00:13:18.159 rmmod nvme_fabrics 00:13:18.159 rmmod nvme_keyring 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.159 10:23:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.698 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:20.699 00:13:20.699 real 0m37.788s 00:13:20.699 user 2m8.024s 00:13:20.699 sys 0m5.851s 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:20.699 ************************************ 00:13:20.699 END TEST nvmf_filesystem 00:13:20.699 ************************************ 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.699 ************************************ 00:13:20.699 START TEST nvmf_target_discovery 00:13:20.699 ************************************ 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:20.699 * Looking for test storage... 
00:13:20.699 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:20.699 
10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:20.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.699 --rc genhtml_branch_coverage=1 00:13:20.699 --rc genhtml_function_coverage=1 00:13:20.699 --rc genhtml_legend=1 00:13:20.699 --rc geninfo_all_blocks=1 00:13:20.699 --rc geninfo_unexecuted_blocks=1 00:13:20.699 00:13:20.699 ' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:20.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.699 --rc genhtml_branch_coverage=1 00:13:20.699 --rc genhtml_function_coverage=1 00:13:20.699 --rc genhtml_legend=1 00:13:20.699 --rc geninfo_all_blocks=1 00:13:20.699 --rc geninfo_unexecuted_blocks=1 00:13:20.699 00:13:20.699 ' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:20.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.699 --rc genhtml_branch_coverage=1 00:13:20.699 --rc genhtml_function_coverage=1 00:13:20.699 --rc genhtml_legend=1 00:13:20.699 --rc geninfo_all_blocks=1 00:13:20.699 --rc geninfo_unexecuted_blocks=1 00:13:20.699 00:13:20.699 ' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:20.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.699 --rc genhtml_branch_coverage=1 00:13:20.699 --rc genhtml_function_coverage=1 00:13:20.699 --rc genhtml_legend=1 00:13:20.699 --rc geninfo_all_blocks=1 00:13:20.699 --rc geninfo_unexecuted_blocks=1 00:13:20.699 00:13:20.699 ' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.699 10:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.699 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:20.700 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:20.700 10:23:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:22.606 10:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.606 10:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:22.606 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:22.606 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.606 10:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:22.606 Found net devices under 0000:09:00.0: cvl_0_0 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.606 10:23:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:22.606 Found net devices under 0000:09:00.1: cvl_0_1 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.606 10:23:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.606 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.606 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.606 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.606 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:22.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:13:22.864 00:13:22.864 --- 10.0.0.2 ping statistics --- 00:13:22.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.864 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:13:22.864 00:13:22.864 --- 10.0.0.1 ping statistics --- 00:13:22.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.864 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2487675 00:13:22.864 10:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2487675 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2487675 ']' 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.864 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:22.864 [2024-12-09 10:23:55.179278] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:13:22.864 [2024-12-09 10:23:55.179360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.864 [2024-12-09 10:23:55.249395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.864 [2024-12-09 10:23:55.304329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:22.864 [2024-12-09 10:23:55.304382] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.864 [2024-12-09 10:23:55.304397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.864 [2024-12-09 10:23:55.304409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.864 [2024-12-09 10:23:55.304420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.121 [2024-12-09 10:23:55.306130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.121 [2024-12-09 10:23:55.306225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.121 [2024-12-09 10:23:55.306196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.121 [2024-12-09 10:23:55.306228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.121 [2024-12-09 10:23:55.459917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.121 Null1 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.121 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 
10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 [2024-12-09 10:23:55.517346] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 Null2 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 
10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.122 Null3 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.122 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 Null4 00:13:23.380 
10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.380 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:13:23.638 00:13:23.638 Discovery Log Number of Records 6, Generation counter 6 00:13:23.639 =====Discovery Log Entry 0====== 00:13:23.639 trtype: tcp 00:13:23.639 adrfam: ipv4 00:13:23.639 subtype: current discovery subsystem 00:13:23.639 treq: not required 00:13:23.639 portid: 0 00:13:23.639 trsvcid: 4420 00:13:23.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:23.639 traddr: 10.0.0.2 00:13:23.639 eflags: explicit discovery connections, duplicate discovery information 00:13:23.639 sectype: none 00:13:23.639 =====Discovery Log Entry 1====== 00:13:23.639 trtype: tcp 00:13:23.639 adrfam: ipv4 00:13:23.639 subtype: nvme subsystem 00:13:23.639 treq: not required 00:13:23.639 portid: 0 00:13:23.639 trsvcid: 4420 00:13:23.639 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:23.639 traddr: 10.0.0.2 00:13:23.639 eflags: none 00:13:23.639 sectype: none 00:13:23.639 =====Discovery Log Entry 2====== 00:13:23.639 
trtype: tcp 00:13:23.639 adrfam: ipv4 00:13:23.639 subtype: nvme subsystem 00:13:23.639 treq: not required 00:13:23.639 portid: 0 00:13:23.639 trsvcid: 4420 00:13:23.639 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:23.639 traddr: 10.0.0.2 00:13:23.639 eflags: none 00:13:23.639 sectype: none 00:13:23.639 =====Discovery Log Entry 3====== 00:13:23.639 trtype: tcp 00:13:23.639 adrfam: ipv4 00:13:23.639 subtype: nvme subsystem 00:13:23.639 treq: not required 00:13:23.639 portid: 0 00:13:23.639 trsvcid: 4420 00:13:23.639 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:23.639 traddr: 10.0.0.2 00:13:23.639 eflags: none 00:13:23.639 sectype: none 00:13:23.639 =====Discovery Log Entry 4====== 00:13:23.639 trtype: tcp 00:13:23.639 adrfam: ipv4 00:13:23.639 subtype: nvme subsystem 00:13:23.639 treq: not required 00:13:23.639 portid: 0 00:13:23.639 trsvcid: 4420 00:13:23.639 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:23.639 traddr: 10.0.0.2 00:13:23.639 eflags: none 00:13:23.639 sectype: none 00:13:23.639 =====Discovery Log Entry 5====== 00:13:23.639 trtype: tcp 00:13:23.639 adrfam: ipv4 00:13:23.639 subtype: discovery subsystem referral 00:13:23.639 treq: not required 00:13:23.639 portid: 0 00:13:23.639 trsvcid: 4430 00:13:23.639 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:23.639 traddr: 10.0.0.2 00:13:23.639 eflags: none 00:13:23.639 sectype: none 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:23.639 Perform nvmf subsystem discovery via RPC 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.639 [ 00:13:23.639 { 00:13:23.639 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:23.639 "subtype": "Discovery", 00:13:23.639 "listen_addresses": [ 00:13:23.639 { 00:13:23.639 "trtype": "TCP", 00:13:23.639 "adrfam": "IPv4", 00:13:23.639 "traddr": "10.0.0.2", 00:13:23.639 "trsvcid": "4420" 00:13:23.639 } 00:13:23.639 ], 00:13:23.639 "allow_any_host": true, 00:13:23.639 "hosts": [] 00:13:23.639 }, 00:13:23.639 { 00:13:23.639 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:23.639 "subtype": "NVMe", 00:13:23.639 "listen_addresses": [ 00:13:23.639 { 00:13:23.639 "trtype": "TCP", 00:13:23.639 "adrfam": "IPv4", 00:13:23.639 "traddr": "10.0.0.2", 00:13:23.639 "trsvcid": "4420" 00:13:23.639 } 00:13:23.639 ], 00:13:23.639 "allow_any_host": true, 00:13:23.639 "hosts": [], 00:13:23.639 "serial_number": "SPDK00000000000001", 00:13:23.639 "model_number": "SPDK bdev Controller", 00:13:23.639 "max_namespaces": 32, 00:13:23.639 "min_cntlid": 1, 00:13:23.639 "max_cntlid": 65519, 00:13:23.639 "namespaces": [ 00:13:23.639 { 00:13:23.639 "nsid": 1, 00:13:23.639 "bdev_name": "Null1", 00:13:23.639 "name": "Null1", 00:13:23.639 "nguid": "0C81010068C94A8486E15883755E7BAB", 00:13:23.639 "uuid": "0c810100-68c9-4a84-86e1-5883755e7bab" 00:13:23.639 } 00:13:23.639 ] 00:13:23.639 }, 00:13:23.639 { 00:13:23.639 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:23.639 "subtype": "NVMe", 00:13:23.639 "listen_addresses": [ 00:13:23.639 { 00:13:23.639 "trtype": "TCP", 00:13:23.639 "adrfam": "IPv4", 00:13:23.639 "traddr": "10.0.0.2", 00:13:23.639 "trsvcid": "4420" 00:13:23.639 } 00:13:23.639 ], 00:13:23.639 "allow_any_host": true, 00:13:23.639 "hosts": [], 00:13:23.639 "serial_number": "SPDK00000000000002", 00:13:23.639 "model_number": "SPDK bdev Controller", 00:13:23.639 "max_namespaces": 32, 00:13:23.639 "min_cntlid": 1, 00:13:23.639 "max_cntlid": 65519, 00:13:23.639 "namespaces": [ 00:13:23.639 { 00:13:23.639 "nsid": 1, 00:13:23.639 "bdev_name": "Null2", 00:13:23.639 "name": "Null2", 00:13:23.639 "nguid": "23C15B4FB29E402D94F6D792CD73D967", 
00:13:23.639 "uuid": "23c15b4f-b29e-402d-94f6-d792cd73d967" 00:13:23.639 } 00:13:23.639 ] 00:13:23.639 }, 00:13:23.639 { 00:13:23.639 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:23.639 "subtype": "NVMe", 00:13:23.639 "listen_addresses": [ 00:13:23.639 { 00:13:23.639 "trtype": "TCP", 00:13:23.639 "adrfam": "IPv4", 00:13:23.639 "traddr": "10.0.0.2", 00:13:23.639 "trsvcid": "4420" 00:13:23.639 } 00:13:23.639 ], 00:13:23.639 "allow_any_host": true, 00:13:23.639 "hosts": [], 00:13:23.639 "serial_number": "SPDK00000000000003", 00:13:23.639 "model_number": "SPDK bdev Controller", 00:13:23.639 "max_namespaces": 32, 00:13:23.639 "min_cntlid": 1, 00:13:23.639 "max_cntlid": 65519, 00:13:23.639 "namespaces": [ 00:13:23.639 { 00:13:23.639 "nsid": 1, 00:13:23.639 "bdev_name": "Null3", 00:13:23.639 "name": "Null3", 00:13:23.639 "nguid": "571BFB7228514CD785C7FEB6C805AAE6", 00:13:23.639 "uuid": "571bfb72-2851-4cd7-85c7-feb6c805aae6" 00:13:23.639 } 00:13:23.639 ] 00:13:23.639 }, 00:13:23.639 { 00:13:23.639 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:23.639 "subtype": "NVMe", 00:13:23.639 "listen_addresses": [ 00:13:23.639 { 00:13:23.639 "trtype": "TCP", 00:13:23.639 "adrfam": "IPv4", 00:13:23.639 "traddr": "10.0.0.2", 00:13:23.639 "trsvcid": "4420" 00:13:23.639 } 00:13:23.639 ], 00:13:23.639 "allow_any_host": true, 00:13:23.639 "hosts": [], 00:13:23.639 "serial_number": "SPDK00000000000004", 00:13:23.639 "model_number": "SPDK bdev Controller", 00:13:23.639 "max_namespaces": 32, 00:13:23.639 "min_cntlid": 1, 00:13:23.639 "max_cntlid": 65519, 00:13:23.639 "namespaces": [ 00:13:23.639 { 00:13:23.639 "nsid": 1, 00:13:23.639 "bdev_name": "Null4", 00:13:23.639 "name": "Null4", 00:13:23.639 "nguid": "4D47F837FDC94FEBA735CB2463A4E737", 00:13:23.639 "uuid": "4d47f837-fdc9-4feb-a735-cb2463a4e737" 00:13:23.639 } 00:13:23.639 ] 00:13:23.639 } 00:13:23.639 ] 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.639 
10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:23.639 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:23.640 10:23:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:23.640 rmmod nvme_tcp 00:13:23.640 rmmod nvme_fabrics 00:13:23.640 rmmod nvme_keyring 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2487675 ']' 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2487675 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2487675 ']' 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2487675 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.640 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487675 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2487675' 00:13:23.898 killing process with pid 2487675 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2487675 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2487675 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:23.898 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:13:24.157 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:24.157 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:13:24.157 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.157 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:24.157 10:23:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:26.062 00:13:26.062 real 0m5.807s 00:13:26.062 user 0m4.947s 00:13:26.062 sys 0m2.023s 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:26.062 ************************************ 00:13:26.062 END TEST nvmf_target_discovery 00:13:26.062 ************************************ 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.062 ************************************ 00:13:26.062 START TEST nvmf_referrals 00:13:26.062 ************************************ 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:26.062 * Looking for test storage... 
00:13:26.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:13:26.062 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:26.331 10:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.331 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:26.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.331 
--rc genhtml_branch_coverage=1 00:13:26.331 --rc genhtml_function_coverage=1 00:13:26.331 --rc genhtml_legend=1 00:13:26.331 --rc geninfo_all_blocks=1 00:13:26.331 --rc geninfo_unexecuted_blocks=1 00:13:26.331 00:13:26.331 ' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:26.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.332 --rc genhtml_branch_coverage=1 00:13:26.332 --rc genhtml_function_coverage=1 00:13:26.332 --rc genhtml_legend=1 00:13:26.332 --rc geninfo_all_blocks=1 00:13:26.332 --rc geninfo_unexecuted_blocks=1 00:13:26.332 00:13:26.332 ' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:26.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.332 --rc genhtml_branch_coverage=1 00:13:26.332 --rc genhtml_function_coverage=1 00:13:26.332 --rc genhtml_legend=1 00:13:26.332 --rc geninfo_all_blocks=1 00:13:26.332 --rc geninfo_unexecuted_blocks=1 00:13:26.332 00:13:26.332 ' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:26.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.332 --rc genhtml_branch_coverage=1 00:13:26.332 --rc genhtml_function_coverage=1 00:13:26.332 --rc genhtml_legend=1 00:13:26.332 --rc geninfo_all_blocks=1 00:13:26.332 --rc geninfo_unexecuted_blocks=1 00:13:26.332 00:13:26.332 ' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.332 
10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.332 10:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:26.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:26.332 10:23:58 
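The `common.sh: line 33: [: : integer expression expected` message above is the classic result of testing an empty string with `-eq`, i.e. `[ '' -eq 1 ]`. A sketch of the usual guard, expanding with a numeric default (`SOME_FLAG` is a hypothetical name, not the variable `nvmf/common.sh` actually uses):

```shell
SOME_FLAG=""                      # empty, as in the failing case above

# "${SOME_FLAG:-0}" substitutes 0 when the variable is unset or empty,
# so the -eq comparison always sees an integer and never errors.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    result="enabled"
else
    result="disabled"
fi
echo "$result"
```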
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:13:26.332 10:23:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:28.871 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:28.871 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:28.871 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:28.872 Found net devices under 0000:09:00.0: cvl_0_0 00:13:28.872 10:24:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:28.872 Found net devices under 0000:09:00.1: cvl_0_1 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:28.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:13:28.872 00:13:28.872 --- 10.0.0.2 ping statistics --- 00:13:28.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.872 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:13:28.872 00:13:28.872 --- 10.0.0.1 ping statistics --- 00:13:28.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.872 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2489832 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
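If the ping summaries above needed to be checked programmatically rather than eyeballed, the average RTT can be pulled from the `rtt min/avg/max/mdev` line with awk alone. A sketch using the exact summary line from the log as fixed input (values copied from the log, not re-measured):

```shell
stats="rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms"

# Split on spaces and slashes; fields 7-10 are then min/avg/max/mdev,
# so the average is field 8.
avg=$(printf '%s\n' "$stats" | awk -F'[ /]' '{print $8}')
echo "$avg"
```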
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2489832 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2489832 ']' 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.872 10:24:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.872 [2024-12-09 10:24:00.935488] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:13:28.872 [2024-12-09 10:24:00.935588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.872 [2024-12-09 10:24:01.009102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.872 [2024-12-09 10:24:01.070001] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.872 [2024-12-09 10:24:01.070054] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:28.872 [2024-12-09 10:24:01.070082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.872 [2024-12-09 10:24:01.070093] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.872 [2024-12-09 10:24:01.070104] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.872 [2024-12-09 10:24:01.071778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.873 [2024-12-09 10:24:01.071841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.873 [2024-12-09 10:24:01.071874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.873 [2024-12-09 10:24:01.071876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 [2024-12-09 10:24:01.242606] tcp.c: 
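The `waitforlisten 2489832` step above polls until the target's RPC socket appears, bounded by `max_retries=100`. A minimal sketch of that polling pattern, with a temp file standing in for `/var/tmp/spdk.sock` and a background job standing in for the starting target (an assumption for illustration, not SPDK's actual implementation):

```shell
sock=$(mktemp -u)               # path that does not exist yet
( sleep 0.2; : > "$sock" ) &    # "target" creates its socket shortly

i=0 max_retries=100
until [ -e "$sock" ]; do        # poll with a short back-off
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && break
    sleep 0.1
done

[ -e "$sock" ] && status=up || status=timeout
echo "$status"
rm -f "$sock"
```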
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 [2024-12-09 10:24:01.262330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:28.873 10:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:28.873 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.131 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.132 10:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:29.132 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
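The `get_referral_ips` checks running through this section reduce to: sort the `traddr` values reported by `nvme discover`/`nvmf_discovery_get_referrals`, join them, and compare against the expected list. A sketch with the `jq`/`nvme` pipeline replaced by fixed input taken from the log:

```shell
# Unordered traddr values, as jq might emit them one per line.
discovered="127.0.0.4
127.0.0.2
127.0.0.3"

# Sort and join with single spaces, mirroring the `sort` + word-split
# comparison the test script performs.
got=$(printf '%s\n' "$discovered" | sort | xargs)
expected="127.0.0.2 127.0.0.3 127.0.0.4"

[ "$got" = "$expected" ] && verdict=match || verdict=mismatch
echo "$verdict"
```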
nvme == \n\v\m\e ]] 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.390 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:29.648 10:24:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.648 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:29.905 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:29.906 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:29.906 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:29.906 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:29.906 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:29.906 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.164 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:30.422 10:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.422 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.680 10:24:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:30.680 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:30.938 rmmod nvme_tcp 00:13:30.938 rmmod nvme_fabrics 00:13:30.938 rmmod nvme_keyring 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2489832 ']' 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2489832 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2489832 ']' 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2489832 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2489832 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2489832' 00:13:30.938 killing process with pid 2489832 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2489832 00:13:30.938 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2489832 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:31.197 10:24:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:33.756 00:13:33.756 real 0m7.234s 00:13:33.756 user 0m11.635s 00:13:33.756 sys 0m2.350s 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:33.756 
************************************ 00:13:33.756 END TEST nvmf_referrals 00:13:33.756 ************************************ 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:33.756 ************************************ 00:13:33.756 START TEST nvmf_connect_disconnect 00:13:33.756 ************************************ 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:33.756 * Looking for test storage... 
00:13:33.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.756 --rc genhtml_branch_coverage=1 00:13:33.756 --rc genhtml_function_coverage=1 00:13:33.756 --rc genhtml_legend=1 00:13:33.756 --rc geninfo_all_blocks=1 00:13:33.756 --rc geninfo_unexecuted_blocks=1 00:13:33.756 00:13:33.756 ' 00:13:33.756 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:33.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.756 --rc genhtml_branch_coverage=1 00:13:33.756 --rc genhtml_function_coverage=1 00:13:33.756 --rc genhtml_legend=1 00:13:33.756 --rc geninfo_all_blocks=1 00:13:33.756 --rc geninfo_unexecuted_blocks=1 00:13:33.756 00:13:33.756 ' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:33.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.757 --rc genhtml_branch_coverage=1 00:13:33.757 --rc genhtml_function_coverage=1 00:13:33.757 --rc genhtml_legend=1 00:13:33.757 --rc geninfo_all_blocks=1 00:13:33.757 --rc geninfo_unexecuted_blocks=1 00:13:33.757 00:13:33.757 ' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:33.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:33.757 --rc genhtml_branch_coverage=1 00:13:33.757 --rc genhtml_function_coverage=1 00:13:33.757 --rc genhtml_legend=1 00:13:33.757 --rc geninfo_all_blocks=1 00:13:33.757 --rc geninfo_unexecuted_blocks=1 00:13:33.757 00:13:33.757 ' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:33.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:33.757 10:24:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:35.659 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:35.659 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:35.659 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:35.659 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:35.659 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:35.659 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:35.918 Found net devices under 0000:09:00.0: cvl_0_0 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:35.918 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:35.918 Found net devices under 0000:09:00.1: cvl_0_1 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:35.918 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:35.919 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:35.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:13:35.919 00:13:35.919 --- 10.0.0.2 ping statistics --- 00:13:35.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.919 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:35.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:35.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:13:35.919 00:13:35.919 --- 10.0.0.1 ping statistics --- 00:13:35.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.919 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2492583 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2492583 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2492583 ']' 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:35.919 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:35.919 [2024-12-09 10:24:08.308057] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:13:35.919 [2024-12-09 10:24:08.308137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:36.177 [2024-12-09 10:24:08.381958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:36.177 [2024-12-09 10:24:08.440866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:36.177 [2024-12-09 10:24:08.440917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:36.177 [2024-12-09 10:24:08.440940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:36.177 [2024-12-09 10:24:08.440953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:36.178 [2024-12-09 10:24:08.440963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:36.178 [2024-12-09 10:24:08.442630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.178 [2024-12-09 10:24:08.442697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.178 [2024-12-09 10:24:08.442768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.178 [2024-12-09 10:24:08.442764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:36.178 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.178 [2024-12-09 10:24:08.585860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.178 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.435 10:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:36.435 [2024-12-09 10:24:08.647263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:13:36.435 10:24:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:38.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.310 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:50.591 10:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:50.591 rmmod nvme_tcp 00:13:50.591 rmmod nvme_fabrics 00:13:50.591 rmmod nvme_keyring 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2492583 ']' 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2492583 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2492583 ']' 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2492583 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2492583 
00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.591 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2492583' 00:13:50.591 killing process with pid 2492583 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2492583 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2492583 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.592 10:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:50.592 10:24:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:52.497 00:13:52.497 real 0m19.160s 00:13:52.497 user 0m57.057s 00:13:52.497 sys 0m3.575s 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:52.497 ************************************ 00:13:52.497 END TEST nvmf_connect_disconnect 00:13:52.497 ************************************ 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.497 10:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:52.755 ************************************ 00:13:52.755 START TEST nvmf_multitarget 00:13:52.755 ************************************ 00:13:52.755 10:24:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:52.755 * Looking for test storage... 
00:13:52.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:52.755 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:52.756 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.756 --rc genhtml_branch_coverage=1 00:13:52.756 --rc genhtml_function_coverage=1 00:13:52.756 --rc genhtml_legend=1 00:13:52.756 --rc geninfo_all_blocks=1 00:13:52.756 --rc geninfo_unexecuted_blocks=1 00:13:52.756 00:13:52.756 ' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:52.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.756 --rc genhtml_branch_coverage=1 00:13:52.756 --rc genhtml_function_coverage=1 00:13:52.756 --rc genhtml_legend=1 00:13:52.756 --rc geninfo_all_blocks=1 00:13:52.756 --rc geninfo_unexecuted_blocks=1 00:13:52.756 00:13:52.756 ' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:52.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.756 --rc genhtml_branch_coverage=1 00:13:52.756 --rc genhtml_function_coverage=1 00:13:52.756 --rc genhtml_legend=1 00:13:52.756 --rc geninfo_all_blocks=1 00:13:52.756 --rc geninfo_unexecuted_blocks=1 00:13:52.756 00:13:52.756 ' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:52.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:52.756 --rc genhtml_branch_coverage=1 00:13:52.756 --rc genhtml_function_coverage=1 00:13:52.756 --rc genhtml_legend=1 00:13:52.756 --rc geninfo_all_blocks=1 00:13:52.756 --rc geninfo_unexecuted_blocks=1 00:13:52.756 00:13:52.756 ' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:52.756 10:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:52.756 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:52.756 10:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:52.756 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:52.757 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:52.757 10:24:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:55.286 10:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:55.286 10:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:55.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:55.286 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.286 10:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:55.286 Found net devices under 0000:09:00.0: cvl_0_0 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.286 
10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:55.286 Found net devices under 0000:09:00.1: cvl_0_1 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.286 10:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:55.286 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:55.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:13:55.286 00:13:55.286 --- 10.0.0.2 ping statistics --- 00:13:55.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.286 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:13:55.287 00:13:55.287 --- 10.0.0.1 ping statistics --- 00:13:55.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.287 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2496473 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2496473 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2496473 ']' 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.287 [2024-12-09 10:24:27.473093] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:13:55.287 [2024-12-09 10:24:27.473212] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.287 [2024-12-09 10:24:27.542882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:55.287 [2024-12-09 10:24:27.597581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.287 [2024-12-09 10:24:27.597638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:55.287 [2024-12-09 10:24:27.597661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.287 [2024-12-09 10:24:27.597672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.287 [2024-12-09 10:24:27.597682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.287 [2024-12-09 10:24:27.599281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.287 [2024-12-09 10:24:27.599342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.287 [2024-12-09 10:24:27.599410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.287 [2024-12-09 10:24:27.599413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:55.287 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:55.544 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.544 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:55.544 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:55.544 10:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:55.544 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:55.544 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:55.544 "nvmf_tgt_1" 00:13:55.544 10:24:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:55.801 "nvmf_tgt_2" 00:13:55.801 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:55.801 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:55.801 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:55.801 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:56.058 true 00:13:56.058 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:56.058 true 00:13:56.058 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:56.058 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:56.317 rmmod nvme_tcp 00:13:56.317 rmmod nvme_fabrics 00:13:56.317 rmmod nvme_keyring 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2496473 ']' 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2496473 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2496473 ']' 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2496473 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2496473 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2496473' 00:13:56.317 killing process with pid 2496473 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2496473 00:13:56.317 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2496473 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:56.577 10:24:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.115 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:59.115 00:13:59.115 real 0m6.056s 00:13:59.115 user 0m6.895s 00:13:59.115 sys 0m2.084s 00:13:59.115 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.115 10:24:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:59.115 ************************************ 00:13:59.115 END TEST nvmf_multitarget 00:13:59.115 ************************************ 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.115 ************************************ 00:13:59.115 START TEST nvmf_rpc 00:13:59.115 ************************************ 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:59.115 * Looking for test storage... 
00:13:59.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:59.115 10:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:59.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.115 --rc genhtml_branch_coverage=1 00:13:59.115 --rc genhtml_function_coverage=1 00:13:59.115 --rc genhtml_legend=1 00:13:59.115 --rc geninfo_all_blocks=1 00:13:59.115 --rc geninfo_unexecuted_blocks=1 
00:13:59.115 00:13:59.115 ' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:59.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.115 --rc genhtml_branch_coverage=1 00:13:59.115 --rc genhtml_function_coverage=1 00:13:59.115 --rc genhtml_legend=1 00:13:59.115 --rc geninfo_all_blocks=1 00:13:59.115 --rc geninfo_unexecuted_blocks=1 00:13:59.115 00:13:59.115 ' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:59.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.115 --rc genhtml_branch_coverage=1 00:13:59.115 --rc genhtml_function_coverage=1 00:13:59.115 --rc genhtml_legend=1 00:13:59.115 --rc geninfo_all_blocks=1 00:13:59.115 --rc geninfo_unexecuted_blocks=1 00:13:59.115 00:13:59.115 ' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:59.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:59.115 --rc genhtml_branch_coverage=1 00:13:59.115 --rc genhtml_function_coverage=1 00:13:59.115 --rc genhtml_legend=1 00:13:59.115 --rc geninfo_all_blocks=1 00:13:59.115 --rc geninfo_unexecuted_blocks=1 00:13:59.115 00:13:59.115 ' 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.115 10:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.115 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:59.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:59.116 10:24:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:59.116 10:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:01.100 
10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:01.100 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x159b)' 00:14:01.101 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:01.101 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:01.101 Found net devices under 0000:09:00.0: cvl_0_0 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:01.101 Found net devices under 0000:09:00.1: cvl_0_1 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:01.101 10:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:01.101 
10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:01.101 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:01.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:01.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:14:01.374 00:14:01.374 --- 10.0.0.2 ping statistics --- 00:14:01.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.374 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:01.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:01.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:14:01.374 00:14:01.374 --- 10.0.0.1 ping statistics --- 00:14:01.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:01.374 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2498585 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:01.374 
10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2498585 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2498585 ']' 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:01.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:01.374 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.374 [2024-12-09 10:24:33.628489] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:14:01.374 [2024-12-09 10:24:33.628591] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:01.374 [2024-12-09 10:24:33.698299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:01.374 [2024-12-09 10:24:33.752778] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:01.374 [2024-12-09 10:24:33.752836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:01.374 [2024-12-09 10:24:33.752860] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:01.374 [2024-12-09 10:24:33.752871] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:01.374 [2024-12-09 10:24:33.752880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:01.374 [2024-12-09 10:24:33.754648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.374 [2024-12-09 10:24:33.754756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:01.374 [2024-12-09 10:24:33.754831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:01.374 [2024-12-09 10:24:33.754834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:01.632 "tick_rate": 2700000000, 00:14:01.632 "poll_groups": [ 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_000", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 
"current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [] 00:14:01.632 }, 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_001", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 "current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [] 00:14:01.632 }, 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_002", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 "current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [] 00:14:01.632 }, 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_003", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 "current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [] 00:14:01.632 } 00:14:01.632 ] 00:14:01.632 }' 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.632 10:24:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.632 [2024-12-09 10:24:33.996046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:01.632 "tick_rate": 2700000000, 00:14:01.632 "poll_groups": [ 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_000", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 "current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [ 00:14:01.632 { 00:14:01.632 "trtype": "TCP" 00:14:01.632 } 00:14:01.632 ] 00:14:01.632 }, 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_001", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 "current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [ 00:14:01.632 { 00:14:01.632 "trtype": "TCP" 00:14:01.632 } 00:14:01.632 ] 00:14:01.632 }, 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_002", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 
"current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [ 00:14:01.632 { 00:14:01.632 "trtype": "TCP" 00:14:01.632 } 00:14:01.632 ] 00:14:01.632 }, 00:14:01.632 { 00:14:01.632 "name": "nvmf_tgt_poll_group_003", 00:14:01.632 "admin_qpairs": 0, 00:14:01.632 "io_qpairs": 0, 00:14:01.632 "current_admin_qpairs": 0, 00:14:01.632 "current_io_qpairs": 0, 00:14:01.632 "pending_bdev_io": 0, 00:14:01.632 "completed_nvme_io": 0, 00:14:01.632 "transports": [ 00:14:01.632 { 00:14:01.632 "trtype": "TCP" 00:14:01.632 } 00:14:01.632 ] 00:14:01.632 } 00:14:01.632 ] 00:14:01.632 }' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:01.632 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.891 Malloc1 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.891 [2024-12-09 10:24:34.165672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.891 
10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:01.891 [2024-12-09 10:24:34.188240] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:01.891 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:01.891 could not add new controller: failed to write to nvme-fabrics device 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.891 10:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.891 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:02.456 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:02.456 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:02.456 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:02.456 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:02.456 10:24:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:04.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.979 10:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:04.979 [2024-12-09 10:24:36.951034] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:04.979 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:04.979 could not add new controller: failed to write to nvme-fabrics device 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.979 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.980 10:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.980 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:04.980 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.980 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.980 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.980 10:24:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:05.237 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:05.237 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:05.237 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:05.237 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:05.237 10:24:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:07.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.825 [2024-12-09 10:24:39.837306] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.825 10:24:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:08.083 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:08.083 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:08.083 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:08.083 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:08.083 10:24:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.613 10:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.613 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.614 [2024-12-09 10:24:42.616807] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.614 10:24:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:10.872 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.872 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:10.872 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.872 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:10.872 10:24:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:13.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 [2024-12-09 10:24:45.361838] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.398 10:24:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:13.656 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:13.656 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:13.656 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:13.656 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:13.656 10:24:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:16.180 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.181 [2024-12-09 10:24:48.186396] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.181 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:16.748 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.748 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:16.748 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.748 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:16.748 10:24:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:18.649 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 10:24:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 [2024-12-09 10:24:51.019082] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 10:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.650 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:19.583 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:19.583 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:19.583 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.583 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:19.583 10:24:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:21.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:21.481 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 [2024-12-09 10:24:53.849754] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 [2024-12-09 10:24:53.897793] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.482 
10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.482 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 [2024-12-09 10:24:53.945953] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.739 
10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 [2024-12-09 10:24:53.994106] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:21.739 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 [2024-12-09 
10:24:54.042328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 
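The second loop above (`target/rpc.sh` lines 99–107 in the trace) repeatedly builds and tears down the same subsystem purely via RPC, with no host connect in between. Condensed into one iteration — `rpc_cmd` is stubbed to echo here; in autotest it wraps SPDK's `scripts/rpc.py` against the running nvmf target:

```shell
#!/bin/sh
# One create/teardown cycle from the trace. rpc_cmd is a stand-in stub;
# the real wrapper forwards each call to scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
SERIAL=SPDKISFASTANDAWESOME

rpc_cmd nvmf_create_subsystem "$NQN" -s "$SERIAL"
rpc_cmd nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_ns "$NQN" Malloc1
rpc_cmd nvmf_subsystem_allow_any_host "$NQN"
rpc_cmd nvmf_subsystem_remove_ns "$NQN" 1
rpc_cmd nvmf_delete_subsystem "$NQN"
```

Each `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice in the log marks the `nvmf_subsystem_add_listener` step of one such iteration.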
10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:21.740 "tick_rate": 2700000000, 00:14:21.740 "poll_groups": [ 00:14:21.740 { 00:14:21.740 "name": "nvmf_tgt_poll_group_000", 00:14:21.740 "admin_qpairs": 2, 00:14:21.740 "io_qpairs": 84, 00:14:21.740 "current_admin_qpairs": 0, 00:14:21.740 "current_io_qpairs": 0, 00:14:21.740 "pending_bdev_io": 0, 00:14:21.740 "completed_nvme_io": 215, 00:14:21.740 "transports": [ 00:14:21.740 { 00:14:21.740 "trtype": "TCP" 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "name": "nvmf_tgt_poll_group_001", 00:14:21.740 "admin_qpairs": 2, 00:14:21.740 "io_qpairs": 84, 00:14:21.740 "current_admin_qpairs": 0, 00:14:21.740 "current_io_qpairs": 0, 00:14:21.740 "pending_bdev_io": 0, 00:14:21.740 "completed_nvme_io": 136, 00:14:21.740 "transports": [ 00:14:21.740 { 00:14:21.740 "trtype": "TCP" 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "name": "nvmf_tgt_poll_group_002", 00:14:21.740 "admin_qpairs": 1, 00:14:21.740 "io_qpairs": 84, 00:14:21.740 "current_admin_qpairs": 0, 00:14:21.740 "current_io_qpairs": 0, 00:14:21.740 "pending_bdev_io": 0, 00:14:21.740 "completed_nvme_io": 151, 00:14:21.740 "transports": [ 00:14:21.740 { 00:14:21.740 "trtype": "TCP" 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }, 00:14:21.740 { 00:14:21.740 "name": "nvmf_tgt_poll_group_003", 00:14:21.740 "admin_qpairs": 2, 00:14:21.740 "io_qpairs": 84, 
00:14:21.740 "current_admin_qpairs": 0, 00:14:21.740 "current_io_qpairs": 0, 00:14:21.740 "pending_bdev_io": 0, 00:14:21.740 "completed_nvme_io": 184, 00:14:21.740 "transports": [ 00:14:21.740 { 00:14:21.740 "trtype": "TCP" 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 } 00:14:21.740 ] 00:14:21.740 }' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:21.740 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:21.997 rmmod nvme_tcp 00:14:21.997 rmmod nvme_fabrics 00:14:21.997 rmmod nvme_keyring 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2498585 ']' 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2498585 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2498585 ']' 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2498585 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498585 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498585' 00:14:21.997 killing process with pid 2498585 00:14:21.997 10:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2498585 00:14:21.997 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2498585 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.255 10:24:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:24.786 00:14:24.786 real 0m25.592s 00:14:24.786 user 1m22.579s 00:14:24.786 sys 0m4.307s 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:24.786 ************************************ 00:14:24.786 END TEST 
nvmf_rpc 00:14:24.786 ************************************ 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.786 ************************************ 00:14:24.786 START TEST nvmf_invalid 00:14:24.786 ************************************ 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:14:24.786 * Looking for test storage... 00:14:24.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:24.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.786 --rc genhtml_branch_coverage=1 00:14:24.786 --rc genhtml_function_coverage=1 00:14:24.786 --rc genhtml_legend=1 00:14:24.786 --rc geninfo_all_blocks=1 00:14:24.786 --rc geninfo_unexecuted_blocks=1 00:14:24.786 00:14:24.786 ' 
00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:24.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.786 --rc genhtml_branch_coverage=1 00:14:24.786 --rc genhtml_function_coverage=1 00:14:24.786 --rc genhtml_legend=1 00:14:24.786 --rc geninfo_all_blocks=1 00:14:24.786 --rc geninfo_unexecuted_blocks=1 00:14:24.786 00:14:24.786 ' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:24.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.786 --rc genhtml_branch_coverage=1 00:14:24.786 --rc genhtml_function_coverage=1 00:14:24.786 --rc genhtml_legend=1 00:14:24.786 --rc geninfo_all_blocks=1 00:14:24.786 --rc geninfo_unexecuted_blocks=1 00:14:24.786 00:14:24.786 ' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:24.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.786 --rc genhtml_branch_coverage=1 00:14:24.786 --rc genhtml_function_coverage=1 00:14:24.786 --rc genhtml_legend=1 00:14:24.786 --rc geninfo_all_blocks=1 00:14:24.786 --rc geninfo_unexecuted_blocks=1 00:14:24.786 00:14:24.786 ' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.786 10:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.786 
10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.786 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.787 10:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.787 10:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:14:24.787 10:24:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:14:26.691 10:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:26.691 10:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:26.691 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:26.691 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:26.691 Found net devices under 0000:09:00.0: cvl_0_0 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:26.691 Found net devices under 0000:09:00.1: cvl_0_1 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:26.691 10:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:26.691 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:26.950 10:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:26.950 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:26.950 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:14:26.950 00:14:26.950 --- 10.0.0.2 ping statistics --- 00:14:26.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.950 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:26.950 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:26.950 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:14:26.950 00:14:26.950 --- 10.0.0.1 ping statistics --- 00:14:26.950 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:26.950 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:26.950 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:26.951 10:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2503199 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2503199 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2503199 ']' 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.951 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:26.951 [2024-12-09 10:24:59.239927] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:14:26.951 [2024-12-09 10:24:59.240029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:26.951 [2024-12-09 10:24:59.313485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:26.951 [2024-12-09 10:24:59.371204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:26.951 [2024-12-09 10:24:59.371254] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:26.951 [2024-12-09 10:24:59.371276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:26.951 [2024-12-09 10:24:59.371286] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:26.951 [2024-12-09 10:24:59.371295] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
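`waitforlisten` blocks until the freshly launched `nvmf_tgt` process is up and accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A minimal sketch of the bounded-retry loop, covering only the process-liveness half (the real helper in autotest_common.sh also probes the RPC socket):

```shell
# Bounded-retry poll in the spirit of waitforlisten; the real helper in
# autotest_common.sh additionally waits for the UNIX domain socket
# /var/tmp/spdk.sock to accept RPC connections
waitfor_pid() {
  local pid=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    # kill -0 probes for process existence without sending a signal
    if kill -0 "$pid" 2>/dev/null; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitfor_pid $$ && echo "process is up"
```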
00:14:26.951 [2024-12-09 10:24:59.372894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.951 [2024-12-09 10:24:59.373016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:26.951 [2024-12-09 10:24:59.373133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:26.951 [2024-12-09 10:24:59.373137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:27.209 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20717 00:14:27.468 [2024-12-09 10:24:59.823643] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:27.468 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:27.468 { 00:14:27.468 "nqn": "nqn.2016-06.io.spdk:cnode20717", 00:14:27.468 "tgt_name": "foobar", 00:14:27.468 "method": "nvmf_create_subsystem", 00:14:27.468 "req_id": 1 00:14:27.468 } 00:14:27.468 Got JSON-RPC error 
response 00:14:27.468 response: 00:14:27.468 { 00:14:27.468 "code": -32603, 00:14:27.468 "message": "Unable to find target foobar" 00:14:27.468 }' 00:14:27.468 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:27.468 { 00:14:27.468 "nqn": "nqn.2016-06.io.spdk:cnode20717", 00:14:27.468 "tgt_name": "foobar", 00:14:27.468 "method": "nvmf_create_subsystem", 00:14:27.468 "req_id": 1 00:14:27.468 } 00:14:27.468 Got JSON-RPC error response 00:14:27.468 response: 00:14:27.468 { 00:14:27.468 "code": -32603, 00:14:27.468 "message": "Unable to find target foobar" 00:14:27.468 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:27.468 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:27.468 10:24:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27698 00:14:27.727 [2024-12-09 10:25:00.152802] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27698: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:27.985 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:27.985 { 00:14:27.985 "nqn": "nqn.2016-06.io.spdk:cnode27698", 00:14:27.985 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:27.985 "method": "nvmf_create_subsystem", 00:14:27.985 "req_id": 1 00:14:27.985 } 00:14:27.985 Got JSON-RPC error response 00:14:27.985 response: 00:14:27.985 { 00:14:27.985 "code": -32602, 00:14:27.985 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:27.985 }' 00:14:27.985 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:27.985 { 00:14:27.985 "nqn": "nqn.2016-06.io.spdk:cnode27698", 00:14:27.985 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:27.985 "method": "nvmf_create_subsystem", 
00:14:27.985 "req_id": 1 00:14:27.985 } 00:14:27.985 Got JSON-RPC error response 00:14:27.985 response: 00:14:27.985 { 00:14:27.985 "code": -32602, 00:14:27.985 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:27.985 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:27.985 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:27.985 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2780 00:14:28.244 [2024-12-09 10:25:00.465865] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2780: invalid model number 'SPDK_Controller' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:28.244 { 00:14:28.244 "nqn": "nqn.2016-06.io.spdk:cnode2780", 00:14:28.244 "model_number": "SPDK_Controller\u001f", 00:14:28.244 "method": "nvmf_create_subsystem", 00:14:28.244 "req_id": 1 00:14:28.244 } 00:14:28.244 Got JSON-RPC error response 00:14:28.244 response: 00:14:28.244 { 00:14:28.244 "code": -32602, 00:14:28.244 "message": "Invalid MN SPDK_Controller\u001f" 00:14:28.244 }' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:28.244 { 00:14:28.244 "nqn": "nqn.2016-06.io.spdk:cnode2780", 00:14:28.244 "model_number": "SPDK_Controller\u001f", 00:14:28.244 "method": "nvmf_create_subsystem", 00:14:28.244 "req_id": 1 00:14:28.244 } 00:14:28.244 Got JSON-RPC error response 00:14:28.244 response: 00:14:28.244 { 00:14:28.244 "code": -32602, 00:14:28.244 "message": "Invalid MN SPDK_Controller\u001f" 00:14:28.244 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:28.244 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.244 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:28.245 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:28.245 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.245 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]] 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ONEECb~"}3]%Z*H'\''aT.d~' 00:14:28.245 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ONEECb~"}3]%Z*H'\''aT.d~' nqn.2016-06.io.spdk:cnode17190 00:14:28.504 [2024-12-09 10:25:00.822907] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17190: invalid serial number 'ONEECb~"}3]%Z*H'aT.d~' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:28.505 { 00:14:28.505 "nqn": "nqn.2016-06.io.spdk:cnode17190", 00:14:28.505 "serial_number": "ONEECb~\"}3]%Z*H'\''aT.d~", 00:14:28.505 "method": "nvmf_create_subsystem", 00:14:28.505 "req_id": 1 00:14:28.505 } 00:14:28.505 Got JSON-RPC error response 00:14:28.505 response: 00:14:28.505 { 00:14:28.505 "code": -32602, 00:14:28.505 "message": "Invalid SN ONEECb~\"}3]%Z*H'\''aT.d~" 00:14:28.505 }' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:28.505 { 00:14:28.505 "nqn": "nqn.2016-06.io.spdk:cnode17190", 00:14:28.505 "serial_number": "ONEECb~\"}3]%Z*H'aT.d~", 00:14:28.505 "method": "nvmf_create_subsystem", 00:14:28.505 "req_id": 1 00:14:28.505 } 00:14:28.505 Got JSON-RPC error response 00:14:28.505 response: 00:14:28.505 { 00:14:28.505 "code": -32602, 00:14:28.505 "message": "Invalid SN ONEECb~\"}3]%Z*H'aT.d~" 00:14:28.505 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
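The long `printf %x` / `echo -e` runs above are `gen_random_s` from target/invalid.sh assembling a serial number one character at a time: pick an ASCII code from the `chars` array, format it as hex, and append the decoded character. A condensed sketch, assuming the code range 32-126 (the real `chars` array also includes 127, the DEL control character used to trigger the invalid-SN/MN errors):

```shell
# Sketch of gen_random_s from target/invalid.sh: build a string of the
# requested length from random printable ASCII codes. Assumption: codes
# 32-126 only; the real chars array also contains 127.
gen_s() {
  local length=$1 ll code string=
  for (( ll = 0; ll < length; ll++ )); do
    code=$(( 32 + RANDOM % 95 ))
    # printf %x yields the hex code, echo -e expands \xNN to the char
    string+=$(echo -e "\\x$(printf %x "$code")")
  done
  echo "$string"
}

gen_s 21
```

The test then feeds the generated string to `nvmf_create_subsystem` and asserts that the JSON-RPC reply matches the `*Invalid SN*` (or `*Invalid MN*`) glob, as in the `[[ ... == *\I\n\v\a\l\i\d\ \S\N* ]]` checks above.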
00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:28.505 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:14:28.505 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:28.505 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.505 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.505 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:28.506 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:14:28.506 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.764 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:28.765 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:28.765 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ s == \- ]] 00:14:28.765 10:25:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 's^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0' 00:14:28.765 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 's^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0' nqn.2016-06.io.spdk:cnode17295 00:14:29.023 [2024-12-09 10:25:01.304569] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17295: invalid model number 's^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0' 00:14:29.023 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:29.023 { 00:14:29.023 "nqn": "nqn.2016-06.io.spdk:cnode17295", 00:14:29.023 "model_number": "s^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0", 00:14:29.023 "method": "nvmf_create_subsystem", 00:14:29.023 "req_id": 1 00:14:29.023 } 00:14:29.023 Got JSON-RPC error response 00:14:29.023 response: 00:14:29.023 { 00:14:29.023 "code": -32602, 00:14:29.023 "message": "Invalid MN s^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0" 00:14:29.023 }' 00:14:29.023 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:29.023 { 00:14:29.023 "nqn": 
"nqn.2016-06.io.spdk:cnode17295", 00:14:29.023 "model_number": "s^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0", 00:14:29.023 "method": "nvmf_create_subsystem", 00:14:29.023 "req_id": 1 00:14:29.023 } 00:14:29.023 Got JSON-RPC error response 00:14:29.023 response: 00:14:29.023 { 00:14:29.023 "code": -32602, 00:14:29.023 "message": "Invalid MN s^5E-X$tEXJuH|!o@9HS@ EKUy&z*/|fD#kuDpP;0" 00:14:29.023 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:29.023 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:29.282 [2024-12-09 10:25:01.613677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.282 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:29.541 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:29.541 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:29.541 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:29.541 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:29.541 10:25:01 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:30.107 [2024-12-09 10:25:02.275777] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:30.107 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:30.107 { 00:14:30.107 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:30.107 "listen_address": { 00:14:30.107 "trtype": "tcp", 00:14:30.107 "traddr": "", 00:14:30.107 "trsvcid": "4421" 
00:14:30.107 }, 00:14:30.107 "method": "nvmf_subsystem_remove_listener", 00:14:30.107 "req_id": 1 00:14:30.107 } 00:14:30.107 Got JSON-RPC error response 00:14:30.107 response: 00:14:30.107 { 00:14:30.107 "code": -32602, 00:14:30.107 "message": "Invalid parameters" 00:14:30.107 }' 00:14:30.107 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:30.107 { 00:14:30.107 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:30.107 "listen_address": { 00:14:30.107 "trtype": "tcp", 00:14:30.107 "traddr": "", 00:14:30.107 "trsvcid": "4421" 00:14:30.107 }, 00:14:30.107 "method": "nvmf_subsystem_remove_listener", 00:14:30.107 "req_id": 1 00:14:30.107 } 00:14:30.107 Got JSON-RPC error response 00:14:30.107 response: 00:14:30.107 { 00:14:30.107 "code": -32602, 00:14:30.107 "message": "Invalid parameters" 00:14:30.107 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:30.107 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14375 -i 0 00:14:30.365 [2024-12-09 10:25:02.604808] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14375: invalid cntlid range [0-65519] 00:14:30.365 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:30.365 { 00:14:30.365 "nqn": "nqn.2016-06.io.spdk:cnode14375", 00:14:30.365 "min_cntlid": 0, 00:14:30.365 "method": "nvmf_create_subsystem", 00:14:30.365 "req_id": 1 00:14:30.365 } 00:14:30.365 Got JSON-RPC error response 00:14:30.365 response: 00:14:30.365 { 00:14:30.365 "code": -32602, 00:14:30.365 "message": "Invalid cntlid range [0-65519]" 00:14:30.365 }' 00:14:30.365 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:30.365 { 00:14:30.365 "nqn": "nqn.2016-06.io.spdk:cnode14375", 00:14:30.365 "min_cntlid": 0, 00:14:30.365 "method": 
"nvmf_create_subsystem", 00:14:30.365 "req_id": 1 00:14:30.365 } 00:14:30.365 Got JSON-RPC error response 00:14:30.365 response: 00:14:30.365 { 00:14:30.365 "code": -32602, 00:14:30.365 "message": "Invalid cntlid range [0-65519]" 00:14:30.365 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:30.365 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31063 -i 65520 00:14:30.623 [2024-12-09 10:25:02.881681] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31063: invalid cntlid range [65520-65519] 00:14:30.623 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:30.623 { 00:14:30.623 "nqn": "nqn.2016-06.io.spdk:cnode31063", 00:14:30.623 "min_cntlid": 65520, 00:14:30.623 "method": "nvmf_create_subsystem", 00:14:30.623 "req_id": 1 00:14:30.623 } 00:14:30.623 Got JSON-RPC error response 00:14:30.623 response: 00:14:30.623 { 00:14:30.623 "code": -32602, 00:14:30.623 "message": "Invalid cntlid range [65520-65519]" 00:14:30.623 }' 00:14:30.623 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:30.623 { 00:14:30.623 "nqn": "nqn.2016-06.io.spdk:cnode31063", 00:14:30.623 "min_cntlid": 65520, 00:14:30.623 "method": "nvmf_create_subsystem", 00:14:30.623 "req_id": 1 00:14:30.623 } 00:14:30.623 Got JSON-RPC error response 00:14:30.623 response: 00:14:30.623 { 00:14:30.623 "code": -32602, 00:14:30.623 "message": "Invalid cntlid range [65520-65519]" 00:14:30.623 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:30.623 10:25:02 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode751 -I 0 00:14:30.881 [2024-12-09 10:25:03.150572] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode751: invalid cntlid range [1-0] 00:14:30.881 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:30.881 { 00:14:30.881 "nqn": "nqn.2016-06.io.spdk:cnode751", 00:14:30.881 "max_cntlid": 0, 00:14:30.881 "method": "nvmf_create_subsystem", 00:14:30.881 "req_id": 1 00:14:30.881 } 00:14:30.881 Got JSON-RPC error response 00:14:30.881 response: 00:14:30.881 { 00:14:30.881 "code": -32602, 00:14:30.881 "message": "Invalid cntlid range [1-0]" 00:14:30.881 }' 00:14:30.881 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:30.881 { 00:14:30.881 "nqn": "nqn.2016-06.io.spdk:cnode751", 00:14:30.881 "max_cntlid": 0, 00:14:30.881 "method": "nvmf_create_subsystem", 00:14:30.881 "req_id": 1 00:14:30.881 } 00:14:30.881 Got JSON-RPC error response 00:14:30.881 response: 00:14:30.881 { 00:14:30.881 "code": -32602, 00:14:30.881 "message": "Invalid cntlid range [1-0]" 00:14:30.881 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:30.881 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2047 -I 65520 00:14:31.139 [2024-12-09 10:25:03.411450] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2047: invalid cntlid range [1-65520] 00:14:31.139 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:31.139 { 00:14:31.139 "nqn": "nqn.2016-06.io.spdk:cnode2047", 00:14:31.139 "max_cntlid": 65520, 00:14:31.139 "method": "nvmf_create_subsystem", 00:14:31.139 "req_id": 1 00:14:31.139 } 00:14:31.139 Got JSON-RPC error response 00:14:31.139 response: 00:14:31.139 { 00:14:31.139 "code": -32602, 00:14:31.139 "message": "Invalid cntlid range [1-65520]" 00:14:31.139 }' 00:14:31.139 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ 
request: 00:14:31.139 { 00:14:31.139 "nqn": "nqn.2016-06.io.spdk:cnode2047", 00:14:31.139 "max_cntlid": 65520, 00:14:31.139 "method": "nvmf_create_subsystem", 00:14:31.139 "req_id": 1 00:14:31.139 } 00:14:31.139 Got JSON-RPC error response 00:14:31.139 response: 00:14:31.139 { 00:14:31.139 "code": -32602, 00:14:31.139 "message": "Invalid cntlid range [1-65520]" 00:14:31.139 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:31.139 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32038 -i 6 -I 5 00:14:31.426 [2024-12-09 10:25:03.692376] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32038: invalid cntlid range [6-5] 00:14:31.426 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:31.426 { 00:14:31.426 "nqn": "nqn.2016-06.io.spdk:cnode32038", 00:14:31.426 "min_cntlid": 6, 00:14:31.426 "max_cntlid": 5, 00:14:31.426 "method": "nvmf_create_subsystem", 00:14:31.426 "req_id": 1 00:14:31.426 } 00:14:31.426 Got JSON-RPC error response 00:14:31.426 response: 00:14:31.426 { 00:14:31.426 "code": -32602, 00:14:31.426 "message": "Invalid cntlid range [6-5]" 00:14:31.426 }' 00:14:31.426 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:31.426 { 00:14:31.426 "nqn": "nqn.2016-06.io.spdk:cnode32038", 00:14:31.426 "min_cntlid": 6, 00:14:31.426 "max_cntlid": 5, 00:14:31.426 "method": "nvmf_create_subsystem", 00:14:31.426 "req_id": 1 00:14:31.426 } 00:14:31.426 Got JSON-RPC error response 00:14:31.426 response: 00:14:31.426 { 00:14:31.426 "code": -32602, 00:14:31.426 "message": "Invalid cntlid range [6-5]" 00:14:31.426 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:31.426 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:31.426 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:31.426 { 00:14:31.426 "name": "foobar", 00:14:31.426 "method": "nvmf_delete_target", 00:14:31.426 "req_id": 1 00:14:31.426 } 00:14:31.426 Got JSON-RPC error response 00:14:31.426 response: 00:14:31.426 { 00:14:31.427 "code": -32602, 00:14:31.427 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:31.427 }' 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:31.427 { 00:14:31.427 "name": "foobar", 00:14:31.427 "method": "nvmf_delete_target", 00:14:31.427 "req_id": 1 00:14:31.427 } 00:14:31.427 Got JSON-RPC error response 00:14:31.427 response: 00:14:31.427 { 00:14:31.427 "code": -32602, 00:14:31.427 "message": "The specified target doesn't exist, cannot delete it." 00:14:31.427 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:31.427 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:31.732 rmmod nvme_tcp 00:14:31.732 
rmmod nvme_fabrics 00:14:31.732 rmmod nvme_keyring 00:14:31.732 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:31.732 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2503199 ']' 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2503199 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2503199 ']' 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2503199 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503199 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503199' 00:14:31.733 killing process with pid 2503199 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2503199 00:14:31.733 10:25:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2503199 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:31.991 10:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.991 10:25:04 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:33.894 00:14:33.894 real 0m9.570s 00:14:33.894 user 0m23.649s 00:14:33.894 sys 0m2.620s 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:33.894 ************************************ 00:14:33.894 END TEST nvmf_invalid 00:14:33.894 ************************************ 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.894 ************************************ 00:14:33.894 START TEST nvmf_connect_stress 00:14:33.894 ************************************ 00:14:33.894 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:34.152 * Looking for test storage... 00:14:34.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.152 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:34.152 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:14:34.152 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:34.152 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 
00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.153 --rc genhtml_branch_coverage=1 00:14:34.153 --rc genhtml_function_coverage=1 00:14:34.153 --rc genhtml_legend=1 00:14:34.153 --rc 
geninfo_all_blocks=1 00:14:34.153 --rc geninfo_unexecuted_blocks=1 00:14:34.153 00:14:34.153 ' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.153 --rc genhtml_branch_coverage=1 00:14:34.153 --rc genhtml_function_coverage=1 00:14:34.153 --rc genhtml_legend=1 00:14:34.153 --rc geninfo_all_blocks=1 00:14:34.153 --rc geninfo_unexecuted_blocks=1 00:14:34.153 00:14:34.153 ' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.153 --rc genhtml_branch_coverage=1 00:14:34.153 --rc genhtml_function_coverage=1 00:14:34.153 --rc genhtml_legend=1 00:14:34.153 --rc geninfo_all_blocks=1 00:14:34.153 --rc geninfo_unexecuted_blocks=1 00:14:34.153 00:14:34.153 ' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:34.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:34.153 --rc genhtml_branch_coverage=1 00:14:34.153 --rc genhtml_function_coverage=1 00:14:34.153 --rc genhtml_legend=1 00:14:34.153 --rc geninfo_all_blocks=1 00:14:34.153 --rc geninfo_unexecuted_blocks=1 00:14:34.153 00:14:34.153 ' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:34.153 
10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:34.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:34.153 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 
-- # gather_supported_nvmf_pci_devs 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:34.154 10:25:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:36.684 10:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 
]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:36.684 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:36.684 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.684 10:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:36.684 Found net devices under 0000:09:00.0: cvl_0_0 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:36.684 Found net devices under 0000:09:00.1: cvl_0_1 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:36.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:14:36.684 00:14:36.684 --- 10.0.0.2 ping statistics --- 00:14:36.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.684 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.684 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.684 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:14:36.684 00:14:36.684 --- 10.0.0.1 ping statistics --- 00:14:36.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.684 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2505852 00:14:36.684 10:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2505852 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2505852 ']' 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:36.684 10:25:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.684 [2024-12-09 10:25:08.859299] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:14:36.685 [2024-12-09 10:25:08.859380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.685 [2024-12-09 10:25:08.928976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:36.685 [2024-12-09 10:25:08.983096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:36.685 [2024-12-09 10:25:08.983160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.685 [2024-12-09 10:25:08.983189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.685 [2024-12-09 10:25:08.983200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.685 [2024-12-09 10:25:08.983209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:36.685 [2024-12-09 10:25:08.984722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.685 [2024-12-09 10:25:08.984785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:36.685 [2024-12-09 10:25:08.984788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.685 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.685 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:14:36.685 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:36.685 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:36.685 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:14:36.943 [2024-12-09 10:25:09.135386] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 [2024-12-09 10:25:09.152730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:36.943 NULL1 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2505889 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:36.943 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:36.944 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:36.944 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.944 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.944 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.202 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.202 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:37.202 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.202 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.202 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:37.460 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.460 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:37.460 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.460 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.460 10:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.026 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.026 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:38.026 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.026 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.026 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.284 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.284 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:38.284 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.284 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.284 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.542 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.542 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:38.542 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.542 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.542 10:25:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:38.799 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.799 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:38.799 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.799 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.799 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.057 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.057 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:39.057 10:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.057 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.057 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.623 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.623 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:39.623 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.623 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.623 10:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:39.882 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.882 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:39.882 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.882 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.882 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.140 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.140 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:40.140 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.140 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.140 
10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.398 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.398 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:40.398 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.398 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.398 10:25:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:40.656 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.656 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:40.656 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.656 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.656 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.222 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.222 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:41.222 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.222 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.222 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.479 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.479 
10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:41.479 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.479 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.479 10:25:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.736 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.736 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:41.736 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.736 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.736 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:41.993 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.993 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:41.993 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.993 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.993 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.250 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.250 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:42.250 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:42.250 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.250 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:42.818 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.818 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:42.818 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.818 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.818 10:25:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.075 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.075 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:43.075 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.075 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.075 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.332 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.332 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:43.332 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.332 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.332 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:14:43.588 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.588 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:43.588 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.588 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.588 10:25:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:43.846 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.846 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:43.846 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.846 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.846 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.410 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.410 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:44.410 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.410 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.410 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.666 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.666 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2505889 00:14:44.666 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.666 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.666 10:25:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:44.923 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.923 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:44.923 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:44.923 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.923 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.180 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.180 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:45.180 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.180 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.180 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:45.744 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.744 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:45.744 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:45.744 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:45.744 10:25:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.002 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.002 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:46.002 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.002 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.002 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.260 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.260 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:46.260 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.260 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.260 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.518 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.518 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:46.518 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.518 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.518 10:25:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:46.776 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:14:46.776 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:46.776 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:46.776 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.776 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:47.034 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2505889 00:14:47.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2505889) - No such process 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2505889 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:47.292 10:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:47.292 rmmod nvme_tcp 00:14:47.292 rmmod nvme_fabrics 00:14:47.292 rmmod nvme_keyring 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2505852 ']' 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2505852 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2505852 ']' 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2505852 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2505852 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:47.292 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2505852' 00:14:47.292 killing process with pid 2505852 00:14:47.293 10:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2505852 00:14:47.293 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2505852 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:47.552 10:25:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.094 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:50.094 00:14:50.094 real 0m15.597s 00:14:50.094 user 0m38.765s 00:14:50.094 sys 0m5.896s 00:14:50.094 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:14:50.094 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:50.094 ************************************ 00:14:50.094 END TEST nvmf_connect_stress 00:14:50.094 ************************************ 00:14:50.094 10:25:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:50.094 10:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:50.095 10:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.095 10:25:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:50.095 ************************************ 00:14:50.095 START TEST nvmf_fused_ordering 00:14:50.095 ************************************ 00:14:50.095 10:25:21 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:50.095 * Looking for test storage... 
00:14:50.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:50.095 10:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.095 10:25:22 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.095 --rc genhtml_branch_coverage=1 00:14:50.095 --rc genhtml_function_coverage=1 00:14:50.095 --rc genhtml_legend=1 00:14:50.095 --rc geninfo_all_blocks=1 00:14:50.095 --rc geninfo_unexecuted_blocks=1 00:14:50.095 00:14:50.095 ' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.095 --rc genhtml_branch_coverage=1 00:14:50.095 --rc genhtml_function_coverage=1 00:14:50.095 --rc genhtml_legend=1 00:14:50.095 --rc geninfo_all_blocks=1 00:14:50.095 --rc geninfo_unexecuted_blocks=1 00:14:50.095 00:14:50.095 ' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.095 --rc genhtml_branch_coverage=1 00:14:50.095 --rc genhtml_function_coverage=1 00:14:50.095 --rc genhtml_legend=1 00:14:50.095 --rc geninfo_all_blocks=1 00:14:50.095 --rc geninfo_unexecuted_blocks=1 00:14:50.095 00:14:50.095 ' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:50.095 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.095 --rc genhtml_branch_coverage=1 00:14:50.095 --rc genhtml_function_coverage=1 00:14:50.095 --rc genhtml_legend=1 00:14:50.095 --rc geninfo_all_blocks=1 00:14:50.095 --rc geninfo_unexecuted_blocks=1 00:14:50.095 00:14:50.095 ' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:50.095 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:50.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:50.096 10:25:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:51.995 10:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:51.995 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.995 10:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:51.995 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.995 10:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:51.995 Found net devices under 0000:09:00.0: cvl_0_0 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:51.995 Found net devices under 0000:09:00.1: cvl_0_1 
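The "Found net devices under 0000:09:00.x" records above come from the harness globbing sysfs for interfaces bound to each discovered PCI address. A minimal standalone sketch of that lookup; the sysfs-root parameter is an assumption added here so it can run against a fake tree (the real script in nvmf/common.sh globs /sys directly):

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI lookup seen in the log:
# /sys/bus/pci/devices/<pci>/net/ holds one entry per network interface
# bound to that PCI device. The harness does
#   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
#   pci_net_devs=("${pci_net_devs[@]##*/}")
list_pci_net_devs() {
    local sysfs_root=$1 pci=$2 d
    for d in "$sysfs_root/bus/pci/devices/$pci/net/"*; do
        [[ -e $d ]] || continue   # glob did not match: no net devices bound
        echo "${d##*/}"           # strip the path, keep the interface name
    done
}
```

On this node the two ice ports 0000:09:00.0 and 0000:09:00.1 resolve to cvl_0_0 and cvl_0_1.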
00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:51.995 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:51.996 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:51.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:51.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:14:51.996 00:14:51.996 --- 10.0.0.2 ping statistics --- 00:14:51.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:51.996 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:14:52.254 00:14:52.254 --- 10.0.0.1 ping statistics --- 00:14:52.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.254 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:52.254 10:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2509149 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2509149 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2509149 ']' 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.254 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.254 [2024-12-09 10:25:24.522309] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
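The target/initiator split built by nvmf_tcp_init in the records above is plain iproute2: move the target interface into a fresh namespace, address both ends (10.0.0.2 target side, 10.0.0.1 initiator side), bring the links up, and open TCP port 4420. A dry-run sketch of that sequence; it echoes instead of executing, since the real commands need root and the cvl_0_* devices, and the `run` wrapper is an assumption added for the sketch:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup from nvmf/common.sh's nvmf_tcp_init.
# "run" only prints the command; drop the echo (and run as root) for real use.
run() { echo "$*"; }

setup_test_net() {
    local tgt_if=$1 ini_if=$2 ns=$3   # e.g. cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"       # target side lives in the netns
    run ip addr add 10.0.0.1/24 dev "$ini_if"   # initiator stays in host netns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_test_net cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

The two pings in the log (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) then verify the link in both directions before nvmf_tgt is started under `ip netns exec`.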
00:14:52.254 [2024-12-09 10:25:24.522401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.254 [2024-12-09 10:25:24.594956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.254 [2024-12-09 10:25:24.650648] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.254 [2024-12-09 10:25:24.650709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:52.254 [2024-12-09 10:25:24.650736] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.254 [2024-12-09 10:25:24.650747] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.254 [2024-12-09 10:25:24.650756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:52.254 [2024-12-09 10:25:24.651379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 [2024-12-09 10:25:24.813636] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 [2024-12-09 10:25:24.829823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 NULL1 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.511 10:25:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:52.511 [2024-12-09 10:25:24.873091] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:14:52.511 [2024-12-09 10:25:24.873151] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2509176 ] 00:14:53.081 Attached to nqn.2016-06.io.spdk:cnode1 00:14:53.081 Namespace ID: 1 size: 1GB 00:14:53.081 fused_ordering(0) 00:14:53.081 fused_ordering(1) 00:14:53.081 fused_ordering(2) 00:14:53.081 fused_ordering(3) 00:14:53.081 fused_ordering(4) 00:14:53.081 fused_ordering(5) 00:14:53.081 fused_ordering(6) 00:14:53.081 fused_ordering(7) 00:14:53.081 fused_ordering(8) 00:14:53.081 fused_ordering(9) 00:14:53.081 fused_ordering(10) 00:14:53.081 fused_ordering(11) 00:14:53.081 fused_ordering(12) 00:14:53.081 fused_ordering(13) 00:14:53.081 fused_ordering(14) 00:14:53.081 fused_ordering(15) 00:14:53.081 fused_ordering(16) 00:14:53.081 fused_ordering(17) 00:14:53.081 fused_ordering(18) 00:14:53.081 fused_ordering(19) 00:14:53.081 fused_ordering(20) 00:14:53.081 fused_ordering(21) 00:14:53.081 fused_ordering(22) 00:14:53.081 fused_ordering(23) 00:14:53.081 fused_ordering(24) 00:14:53.081 fused_ordering(25) 00:14:53.081 fused_ordering(26) 00:14:53.081 fused_ordering(27) 00:14:53.081 
fused_ordering(28) 00:14:53.081 [... repetitive counter output elided: fused_ordering(28) through fused_ordering(997) increment monotonically with no gaps; timestamps advance from 00:14:53.081 to 00:14:55.096 ...] 00:14:55.096 fused_ordering(997)
00:14:55.096 fused_ordering(998) 00:14:55.096 fused_ordering(999) 00:14:55.096 fused_ordering(1000) 00:14:55.096 fused_ordering(1001) 00:14:55.096 fused_ordering(1002) 00:14:55.096 fused_ordering(1003) 00:14:55.096 fused_ordering(1004) 00:14:55.096 fused_ordering(1005) 00:14:55.096 fused_ordering(1006) 00:14:55.096 fused_ordering(1007) 00:14:55.096 fused_ordering(1008) 00:14:55.096 fused_ordering(1009) 00:14:55.096 fused_ordering(1010) 00:14:55.096 fused_ordering(1011) 00:14:55.096 fused_ordering(1012) 00:14:55.096 fused_ordering(1013) 00:14:55.096 fused_ordering(1014) 00:14:55.096 fused_ordering(1015) 00:14:55.096 fused_ordering(1016) 00:14:55.096 fused_ordering(1017) 00:14:55.096 fused_ordering(1018) 00:14:55.096 fused_ordering(1019) 00:14:55.096 fused_ordering(1020) 00:14:55.096 fused_ordering(1021) 00:14:55.096 fused_ordering(1022) 00:14:55.096 fused_ordering(1023) 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:55.096 rmmod nvme_tcp 00:14:55.096 rmmod nvme_fabrics 00:14:55.096 rmmod nvme_keyring 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:14:55.096 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2509149 ']' 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2509149 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2509149 ']' 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2509149 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2509149 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2509149' 00:14:55.097 killing process with pid 2509149 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2509149 00:14:55.097 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2509149 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:55.357 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:55.358 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:55.358 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:55.358 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.358 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:55.358 10:25:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:57.267 00:14:57.267 real 0m7.653s 00:14:57.267 user 0m5.217s 00:14:57.267 sys 0m3.165s 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:57.267 ************************************ 00:14:57.267 END TEST nvmf_fused_ordering 00:14:57.267 ************************************ 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:57.267 10:25:29 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:57.267 ************************************ 00:14:57.267 START TEST nvmf_ns_masking 00:14:57.267 ************************************ 00:14:57.267 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:57.527 * Looking for test storage... 00:14:57.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.527 10:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.527 --rc genhtml_branch_coverage=1 00:14:57.527 --rc genhtml_function_coverage=1 00:14:57.527 --rc genhtml_legend=1 00:14:57.527 --rc geninfo_all_blocks=1 00:14:57.527 --rc geninfo_unexecuted_blocks=1 00:14:57.527 00:14:57.527 ' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.527 --rc genhtml_branch_coverage=1 00:14:57.527 --rc genhtml_function_coverage=1 00:14:57.527 --rc genhtml_legend=1 00:14:57.527 --rc geninfo_all_blocks=1 00:14:57.527 --rc geninfo_unexecuted_blocks=1 00:14:57.527 00:14:57.527 ' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.527 --rc genhtml_branch_coverage=1 00:14:57.527 --rc genhtml_function_coverage=1 00:14:57.527 --rc genhtml_legend=1 00:14:57.527 --rc geninfo_all_blocks=1 00:14:57.527 --rc geninfo_unexecuted_blocks=1 00:14:57.527 00:14:57.527 ' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.527 --rc genhtml_branch_coverage=1 00:14:57.527 --rc 
genhtml_function_coverage=1 00:14:57.527 --rc genhtml_legend=1 00:14:57.527 --rc geninfo_all_blocks=1 00:14:57.527 --rc geninfo_unexecuted_blocks=1 00:14:57.527 00:14:57.527 ' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:57.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c75664c2-9f1b-41c7-a16d-7dd083b31c3f 00:14:57.527 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=33c83a1a-5342-4008-996d-fa3058fddf3e 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4a1d8f53-c9cd-47c3-bd12-e67f13d039c9 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:57.528 10:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:00.140 10:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:00.140 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.141 10:25:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:00.141 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:00.141 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:15:00.141 Found net devices under 0000:09:00.0: cvl_0_0 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:00.141 Found net devices under 0000:09:00.1: cvl_0_1 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:00.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:15:00.141 00:15:00.141 --- 10.0.0.2 ping statistics --- 00:15:00.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.141 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:15:00.141 00:15:00.141 --- 10.0.0.1 ping statistics --- 00:15:00.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.141 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:00.141 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2511505 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2511505 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2511505 ']' 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.142 [2024-12-09 10:25:32.255945] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:15:00.142 [2024-12-09 10:25:32.256047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.142 [2024-12-09 10:25:32.329403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.142 [2024-12-09 10:25:32.387615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.142 [2024-12-09 10:25:32.387678] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:00.142 [2024-12-09 10:25:32.387706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.142 [2024-12-09 10:25:32.387717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.142 [2024-12-09 10:25:32.387726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.142 [2024-12-09 10:25:32.388333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.142 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:00.399 [2024-12-09 10:25:32.791896] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:00.399 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:00.399 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:00.399 10:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:00.656 Malloc1 00:15:00.913 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:01.171 Malloc2 00:15:01.171 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:01.429 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:01.685 10:25:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.942 [2024-12-09 10:25:34.212218] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.942 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:01.942 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4a1d8f53-c9cd-47c3-bd12-e67f13d039c9 -a 10.0.0.2 -s 4420 -i 4 00:15:02.199 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.199 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:02.199 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.199 10:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:02.199 10:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:04.094 [ 0]:0x1 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.094 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:04.352 
10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a9134c3d4c9647ab9e2430911eae64fe 00:15:04.352 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a9134c3d4c9647ab9e2430911eae64fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.352 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:04.610 [ 0]:0x1 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a9134c3d4c9647ab9e2430911eae64fe 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a9134c3d4c9647ab9e2430911eae64fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:04.610 [ 1]:0x2 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:04.610 10:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.868 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.126 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:05.384 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:05.384 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4a1d8f53-c9cd-47c3-bd12-e67f13d039c9 -a 10.0.0.2 -s 4420 -i 4 00:15:05.642 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:05.642 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:05.642 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:05.642 10:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:15:05.642 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:15:05.642 10:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:07.541 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:07.542 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:07.800 [ 0]:0x2 00:15:07.800 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:07.800 10:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:07.800 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:07.800 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:07.800 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.058 [ 0]:0x1 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a9134c3d4c9647ab9e2430911eae64fe 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a9134c3d4c9647ab9e2430911eae64fe != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.058 [ 1]:0x2 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.058 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:08.316 [ 0]:0x2 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:08.316 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:08.575 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:08.575 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:08.575 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:08.575 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.575 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.575 10:25:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:08.833 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:08.833 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4a1d8f53-c9cd-47c3-bd12-e67f13d039c9 -a 10.0.0.2 -s 4420 -i 4 00:15:09.091 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:09.091 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:09.091 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:09.091 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:09.091 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:09.091 10:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:10.988 [ 0]:0x1 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:10.988 10:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a9134c3d4c9647ab9e2430911eae64fe 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a9134c3d4c9647ab9e2430911eae64fe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:10.988 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.245 [ 1]:0x2 00:15:11.245 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.245 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.245 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:11.245 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.245 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.503 
10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:11.503 [ 0]:0x2 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.503 10:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:11.503 10:25:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:11.760 [2024-12-09 10:25:44.174012] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:11.760 request: 00:15:11.760 { 00:15:11.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:11.760 "nsid": 2, 00:15:11.760 "host": "nqn.2016-06.io.spdk:host1", 00:15:11.760 "method": "nvmf_ns_remove_host", 00:15:11.760 "req_id": 1 00:15:11.760 } 00:15:11.760 Got JSON-RPC error response 00:15:11.760 response: 00:15:11.760 { 00:15:11.760 "code": -32602, 00:15:11.760 "message": "Invalid parameters" 00:15:11.760 } 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:11.760 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:12.018 10:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:12.018 [ 0]:0x2 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa968465665b4f70a1410efdf13b748a 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa968465665b4f70a1410efdf13b748a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:12.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2513005 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2513005 /var/tmp/host.sock 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2513005 ']' 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:12.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.018 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:12.018 [2024-12-09 10:25:44.391866] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:15:12.018 [2024-12-09 10:25:44.391948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2513005 ] 00:15:12.018 [2024-12-09 10:25:44.459312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.276 [2024-12-09 10:25:44.517544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.533 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.533 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:12.533 10:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:12.790 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:13.047 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c75664c2-9f1b-41c7-a16d-7dd083b31c3f 00:15:13.047 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:13.047 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C75664C29F1B41C7A16D7DD083B31C3F -i 00:15:13.303 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 33c83a1a-5342-4008-996d-fa3058fddf3e 00:15:13.303 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:13.303 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 33C83A1A53424008996DFA3058FDDF3E -i 00:15:13.560 10:25:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:13.816 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:14.073 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:14.073 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:14.638 nvme0n1 00:15:14.638 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:14.638 10:25:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:14.896 nvme1n2 00:15:14.896 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:14.896 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:14.896 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:14.896 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:14.896 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:15.154 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:15.154 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:15.154 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:15.154 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:15.412 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c75664c2-9f1b-41c7-a16d-7dd083b31c3f == \c\7\5\6\6\4\c\2\-\9\f\1\b\-\4\1\c\7\-\a\1\6\d\-\7\d\d\0\8\3\b\3\1\c\3\f ]] 00:15:15.412 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:15.412 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:15.412 10:25:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:15.671 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 33c83a1a-5342-4008-996d-fa3058fddf3e == \3\3\c\8\3\a\1\a\-\5\3\4\2\-\4\0\0\8\-\9\9\6\d\-\f\a\3\0\5\8\f\d\d\f\3\e ]] 00:15:15.671 10:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:15.929 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c75664c2-9f1b-41c7-a16d-7dd083b31c3f 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C75664C29F1B41C7A16D7DD083B31C3F 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C75664C29F1B41C7A16D7DD083B31C3F 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:16.187 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C75664C29F1B41C7A16D7DD083B31C3F 00:15:16.446 [2024-12-09 10:25:48.823683] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:16.446 [2024-12-09 10:25:48.823723] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:16.446 [2024-12-09 10:25:48.823752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:16.446 request: 00:15:16.446 { 00:15:16.446 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:16.446 "namespace": { 00:15:16.446 "bdev_name": "invalid", 00:15:16.446 "nsid": 1, 00:15:16.446 "nguid": "C75664C29F1B41C7A16D7DD083B31C3F", 00:15:16.446 "no_auto_visible": false, 00:15:16.446 "hide_metadata": false 00:15:16.446 }, 00:15:16.446 "method": "nvmf_subsystem_add_ns", 00:15:16.446 "req_id": 1 00:15:16.446 } 00:15:16.446 Got JSON-RPC error response 00:15:16.446 response: 00:15:16.446 { 00:15:16.446 "code": -32602, 00:15:16.446 "message": "Invalid parameters" 00:15:16.446 } 00:15:16.446 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:15:16.446 10:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.446 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:16.446 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.446 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c75664c2-9f1b-41c7-a16d-7dd083b31c3f 00:15:16.446 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:16.446 10:25:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C75664C29F1B41C7A16D7DD083B31C3F -i 00:15:16.705 10:25:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2513005 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2513005 ']' 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2513005 00:15:19.234 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:19.234 10:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.235 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2513005 00:15:19.235 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:19.235 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:19.235 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2513005' 00:15:19.235 killing process with pid 2513005 00:15:19.235 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2513005 00:15:19.235 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2513005 00:15:19.800 10:25:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:19.800 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:15:19.800 rmmod nvme_tcp 00:15:19.800 rmmod nvme_fabrics 00:15:20.058 rmmod nvme_keyring 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2511505 ']' 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2511505 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2511505 ']' 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2511505 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511505 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511505' 00:15:20.058 killing process with pid 2511505 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2511505 00:15:20.058 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2511505 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:20.316 10:25:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:22.846 00:15:22.846 real 0m24.998s 00:15:22.846 user 0m36.001s 00:15:22.846 sys 0m4.785s 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:22.846 ************************************ 00:15:22.846 END TEST nvmf_ns_masking 00:15:22.846 ************************************ 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 
1 ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:22.846 ************************************ 00:15:22.846 START TEST nvmf_nvme_cli 00:15:22.846 ************************************ 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:22.846 * Looking for test storage... 00:15:22.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra 
ver1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.846 --rc genhtml_branch_coverage=1 00:15:22.846 --rc genhtml_function_coverage=1 00:15:22.846 --rc genhtml_legend=1 00:15:22.846 --rc geninfo_all_blocks=1 00:15:22.846 --rc geninfo_unexecuted_blocks=1 00:15:22.846 
00:15:22.846 ' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.846 --rc genhtml_branch_coverage=1 00:15:22.846 --rc genhtml_function_coverage=1 00:15:22.846 --rc genhtml_legend=1 00:15:22.846 --rc geninfo_all_blocks=1 00:15:22.846 --rc geninfo_unexecuted_blocks=1 00:15:22.846 00:15:22.846 ' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.846 --rc genhtml_branch_coverage=1 00:15:22.846 --rc genhtml_function_coverage=1 00:15:22.846 --rc genhtml_legend=1 00:15:22.846 --rc geninfo_all_blocks=1 00:15:22.846 --rc geninfo_unexecuted_blocks=1 00:15:22.846 00:15:22.846 ' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:22.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.846 --rc genhtml_branch_coverage=1 00:15:22.846 --rc genhtml_function_coverage=1 00:15:22.846 --rc genhtml_legend=1 00:15:22.846 --rc geninfo_all_blocks=1 00:15:22.846 --rc geninfo_unexecuted_blocks=1 00:15:22.846 00:15:22.846 ' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.846 10:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:22.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:22.846 10:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:24.746 10:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:24.746 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:24.746 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:24.746 10:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:24.746 Found net devices under 0000:09:00.0: cvl_0_0 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:24.746 Found net devices under 0000:09:00.1: cvl_0_1 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:24.746 10:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:24.746 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:24.747 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:25.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:15:25.005 00:15:25.005 --- 10.0.0.2 ping statistics --- 00:15:25.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.005 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:15:25.005 00:15:25.005 --- 10.0.0.1 ping statistics --- 00:15:25.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.005 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:25.005 10:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2516035 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2516035 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2516035 ']' 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.005 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.005 [2024-12-09 10:25:57.286261] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:15:25.005 [2024-12-09 10:25:57.286364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.005 [2024-12-09 10:25:57.358391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.005 [2024-12-09 10:25:57.419722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.005 [2024-12-09 10:25:57.419780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.005 [2024-12-09 10:25:57.419808] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.005 [2024-12-09 10:25:57.419819] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.005 [2024-12-09 10:25:57.419829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:25.005 [2024-12-09 10:25:57.421599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.005 [2024-12-09 10:25:57.421664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.005 [2024-12-09 10:25:57.421731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.006 [2024-12-09 10:25:57.421734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 [2024-12-09 10:25:57.579035] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 Malloc0 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 Malloc1 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 [2024-12-09 10:25:57.685257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:25.264 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.265 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:15:25.523 00:15:25.523 Discovery Log Number of Records 2, Generation counter 2 00:15:25.523 =====Discovery Log Entry 0====== 00:15:25.523 trtype: tcp 00:15:25.523 adrfam: ipv4 00:15:25.523 subtype: current discovery subsystem 00:15:25.523 treq: not required 00:15:25.523 portid: 0 00:15:25.523 trsvcid: 4420 
00:15:25.523 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:25.523 traddr: 10.0.0.2 00:15:25.523 eflags: explicit discovery connections, duplicate discovery information 00:15:25.523 sectype: none 00:15:25.523 =====Discovery Log Entry 1====== 00:15:25.523 trtype: tcp 00:15:25.523 adrfam: ipv4 00:15:25.523 subtype: nvme subsystem 00:15:25.523 treq: not required 00:15:25.523 portid: 0 00:15:25.523 trsvcid: 4420 00:15:25.523 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:25.523 traddr: 10.0.0.2 00:15:25.523 eflags: none 00:15:25.523 sectype: none 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:25.523 10:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:26.459 10:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:26.459 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:15:26.459 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:26.459 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:15:26.459 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:15:26.459 10:25:58 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:28.420 
10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:28.420 /dev/nvme0n2 ]] 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.420 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:28.679 10:26:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:28.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:28.937 rmmod nvme_tcp 00:15:28.937 rmmod nvme_fabrics 00:15:28.937 rmmod nvme_keyring 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2516035 ']' 
00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2516035 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2516035 ']' 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2516035 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2516035 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2516035' 00:15:28.937 killing process with pid 2516035 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2516035 00:15:28.937 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2516035 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # 
iptables-restore 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:29.196 10:26:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:31.733 00:15:31.733 real 0m8.916s 00:15:31.733 user 0m17.086s 00:15:31.733 sys 0m2.421s 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:31.733 ************************************ 00:15:31.733 END TEST nvmf_nvme_cli 00:15:31.733 ************************************ 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.733 ************************************ 
00:15:31.733 START TEST nvmf_vfio_user 00:15:31.733 ************************************ 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:31.733 * Looking for test storage... 00:15:31.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:31.733 
10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:31.733 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:31.734 10:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:31.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.734 --rc genhtml_branch_coverage=1 00:15:31.734 --rc genhtml_function_coverage=1 00:15:31.734 --rc genhtml_legend=1 00:15:31.734 --rc geninfo_all_blocks=1 00:15:31.734 --rc geninfo_unexecuted_blocks=1 00:15:31.734 00:15:31.734 ' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:31.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.734 --rc genhtml_branch_coverage=1 00:15:31.734 --rc genhtml_function_coverage=1 00:15:31.734 --rc genhtml_legend=1 00:15:31.734 --rc geninfo_all_blocks=1 00:15:31.734 --rc geninfo_unexecuted_blocks=1 00:15:31.734 00:15:31.734 ' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:31.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.734 --rc genhtml_branch_coverage=1 00:15:31.734 --rc genhtml_function_coverage=1 00:15:31.734 --rc genhtml_legend=1 00:15:31.734 --rc geninfo_all_blocks=1 00:15:31.734 --rc geninfo_unexecuted_blocks=1 00:15:31.734 00:15:31.734 ' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:31.734 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:31.734 --rc genhtml_branch_coverage=1 00:15:31.734 --rc genhtml_function_coverage=1 00:15:31.734 --rc genhtml_legend=1 00:15:31.734 --rc geninfo_all_blocks=1 00:15:31.734 --rc geninfo_unexecuted_blocks=1 00:15:31.734 00:15:31.734 ' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.734 
10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:31.734 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:31.734 10:26:03 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2516866 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2516866' 00:15:31.734 Process pid: 2516866 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2516866 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' 
-z 2516866 ']' 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.734 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:31.735 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.735 10:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:31.735 [2024-12-09 10:26:03.892821] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:15:31.735 [2024-12-09 10:26:03.892902] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.735 [2024-12-09 10:26:03.968135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:31.735 [2024-12-09 10:26:04.027704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:31.735 [2024-12-09 10:26:04.027768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:31.735 [2024-12-09 10:26:04.027797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:31.735 [2024-12-09 10:26:04.027808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:31.735 [2024-12-09 10:26:04.027818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:31.735 [2024-12-09 10:26:04.029374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:31.735 [2024-12-09 10:26:04.029443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:31.735 [2024-12-09 10:26:04.029504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:31.735 [2024-12-09 10:26:04.029508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.735 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:31.735 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:31.735 10:26:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:33.105 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:33.105 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:33.105 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:33.105 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:33.105 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:33.105 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:33.362 Malloc1 00:15:33.362 10:26:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:33.619 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:33.876 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:34.133 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.133 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:34.133 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:34.697 Malloc2 00:15:34.697 10:26:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:34.697 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:35.262 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:35.262 [2024-12-09 10:26:07.697895] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:15:35.262 [2024-12-09 10:26:07.697939] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2517394 ] 00:15:35.521 [2024-12-09 10:26:07.748406] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:35.521 [2024-12-09 10:26:07.757584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:35.521 [2024-12-09 10:26:07.757617] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6bd5ac5000 00:15:35.521 [2024-12-09 10:26:07.758571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.759566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.760577] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.761582] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.762584] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.763586] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.764595] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.765597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:35.521 [2024-12-09 10:26:07.766605] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:35.521 [2024-12-09 10:26:07.766625] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6bd5aba000 00:15:35.521 [2024-12-09 10:26:07.767741] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:35.521 [2024-12-09 10:26:07.783376] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:35.521 [2024-12-09 10:26:07.783418] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:35.521 [2024-12-09 10:26:07.788731] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:15:35.521 [2024-12-09 10:26:07.788784] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:35.521 [2024-12-09 10:26:07.788870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:35.521 [2024-12-09 10:26:07.788895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:35.521 [2024-12-09 10:26:07.788905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:35.521 [2024-12-09 10:26:07.789719] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:35.521 [2024-12-09 10:26:07.789754] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:35.521 [2024-12-09 10:26:07.789767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:35.521 [2024-12-09 10:26:07.790725] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:35.521 [2024-12-09 10:26:07.790745] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:35.521 [2024-12-09 10:26:07.790758] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:35.521 [2024-12-09 10:26:07.791732] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:35.521 [2024-12-09 10:26:07.791753] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:35.521 [2024-12-09 10:26:07.792742] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:35.521 [2024-12-09 10:26:07.792770] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:35.521 [2024-12-09 10:26:07.792779] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:35.521 [2024-12-09 10:26:07.792790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:35.521 [2024-12-09 10:26:07.792904] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:35.521 [2024-12-09 10:26:07.792912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:35.521 [2024-12-09 10:26:07.792921] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:35.521 [2024-12-09 10:26:07.793748] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:35.521 [2024-12-09 10:26:07.794748] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:35.521 [2024-12-09 10:26:07.795757] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
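The register traffic above is the standard NVMe controller-enable handshake carried over vfio-user: the host reads CAP (offset 0x0) and VS (0x8), clears and then sets CC.EN (bit 0 of offset 0x14), and polls CSTS.RDY (bit 0 of offset 0x1C) until it reads 1. A toy model of just the enable/poll step, with `reg[]` standing in for the mapped BAR0 and the RDY transition made synchronous for illustration (a real device flips it asynchronously):

```shell
#!/usr/bin/env bash
# Toy model of the CC.EN / CSTS.RDY handshake seen in the log.
# Offsets: CC = 0x14, CSTS = 0x1c. reg[] stands in for BAR0.
declare -A reg=([0x14]=0 [0x1c]=0)

reg_read()  { echo "${reg[$1]}"; }
reg_write() {
    reg[$1]=$2
    # The real controller sets CSTS.RDY some time after CC.EN is written;
    # this model does it immediately so the polling loop below terminates.
    if [ "$1" = "0x14" ] && [ $(( $2 & 1 )) -eq 1 ]; then
        reg[0x1c]=1
    fi
}

enable_ctrlr() {
    reg_write 0x14 $(( $(reg_read 0x14) | 1 ))                # set CC.EN = 1
    until [ $(( $(reg_read 0x1c) & 1 )) -eq 1 ]; do           # poll CSTS.RDY
        sleep 0.01
    done
    echo "controller ready"
}

enable_ctrlr
```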
00:15:35.521 [2024-12-09 10:26:07.796758] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:35.521 [2024-12-09 10:26:07.796874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:35.521 [2024-12-09 10:26:07.797775] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:35.521 [2024-12-09 10:26:07.797793] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:35.521 [2024-12-09 10:26:07.797802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:35.521 [2024-12-09 10:26:07.797825] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:35.521 [2024-12-09 10:26:07.797842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:35.521 [2024-12-09 10:26:07.797871] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.521 [2024-12-09 10:26:07.797881] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.521 [2024-12-09 10:26:07.797887] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.521 [2024-12-09 10:26:07.797903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.521 [2024-12-09 10:26:07.797972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:35.521 [2024-12-09 10:26:07.797992] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:35.521 [2024-12-09 10:26:07.798000] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:35.522 [2024-12-09 10:26:07.798007] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:35.522 [2024-12-09 10:26:07.798015] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:35.522 [2024-12-09 10:26:07.798022] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:35.522 [2024-12-09 10:26:07.798029] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:35.522 [2024-12-09 10:26:07.798036] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.522 [2024-12-09 
10:26:07.798112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.522 [2024-12-09 10:26:07.798147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.522 [2024-12-09 10:26:07.798161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:35.522 [2024-12-09 10:26:07.798170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798224] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:35.522 [2024-12-09 10:26:07.798232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798351] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798381] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:35.522 [2024-12-09 10:26:07.798389] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:35.522 [2024-12-09 10:26:07.798395] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.522 [2024-12-09 10:26:07.798405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798443] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:35.522 [2024-12-09 10:26:07.798477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798504] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.522 [2024-12-09 10:26:07.798512] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.522 [2024-12-09 10:26:07.798518] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.522 [2024-12-09 10:26:07.798527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798608] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:35.522 [2024-12-09 10:26:07.798616] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.522 [2024-12-09 10:26:07.798621] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.522 [2024-12-09 10:26:07.798630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798665] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798678] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798715] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:35.522 [2024-12-09 10:26:07.798722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:35.522 [2024-12-09 10:26:07.798730] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:35.522 [2024-12-09 10:26:07.798753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798770] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.798876] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:35.522 [2024-12-09 10:26:07.798885] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:35.522 [2024-12-09 10:26:07.798891] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:35.522 [2024-12-09 10:26:07.798897] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:35.522 [2024-12-09 10:26:07.798902] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:35.522 [2024-12-09 10:26:07.798911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:15:35.522 [2024-12-09 10:26:07.798923] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:35.522 [2024-12-09 10:26:07.798930] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:35.522 [2024-12-09 10:26:07.798936] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.522 [2024-12-09 10:26:07.798944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798955] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:35.522 [2024-12-09 10:26:07.798962] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:35.522 [2024-12-09 10:26:07.798968] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.522 [2024-12-09 10:26:07.798976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.798988] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:35.522 [2024-12-09 10:26:07.798995] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:35.522 [2024-12-09 10:26:07.799001] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:35.522 [2024-12-09 10:26:07.799009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:35.522 [2024-12-09 10:26:07.799020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:15:35.522 [2024-12-09 10:26:07.799041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:35.523 [2024-12-09 10:26:07.799059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:35.523 [2024-12-09 10:26:07.799070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:35.523 ===================================================== 00:15:35.523 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:35.523 ===================================================== 00:15:35.523 Controller Capabilities/Features 00:15:35.523 ================================ 00:15:35.523 Vendor ID: 4e58 00:15:35.523 Subsystem Vendor ID: 4e58 00:15:35.523 Serial Number: SPDK1 00:15:35.523 Model Number: SPDK bdev Controller 00:15:35.523 Firmware Version: 25.01 00:15:35.523 Recommended Arb Burst: 6 00:15:35.523 IEEE OUI Identifier: 8d 6b 50 00:15:35.523 Multi-path I/O 00:15:35.523 May have multiple subsystem ports: Yes 00:15:35.523 May have multiple controllers: Yes 00:15:35.523 Associated with SR-IOV VF: No 00:15:35.523 Max Data Transfer Size: 131072 00:15:35.523 Max Number of Namespaces: 32 00:15:35.523 Max Number of I/O Queues: 127 00:15:35.523 NVMe Specification Version (VS): 1.3 00:15:35.523 NVMe Specification Version (Identify): 1.3 00:15:35.523 Maximum Queue Entries: 256 00:15:35.523 Contiguous Queues Required: Yes 00:15:35.523 Arbitration Mechanisms Supported 00:15:35.523 Weighted Round Robin: Not Supported 00:15:35.523 Vendor Specific: Not Supported 00:15:35.523 Reset Timeout: 15000 ms 00:15:35.523 Doorbell Stride: 4 bytes 00:15:35.523 NVM Subsystem Reset: Not Supported 00:15:35.523 Command Sets Supported 00:15:35.523 NVM Command Set: Supported 00:15:35.523 Boot Partition: Not Supported 00:15:35.523 Memory 
Page Size Minimum: 4096 bytes 00:15:35.523 Memory Page Size Maximum: 4096 bytes 00:15:35.523 Persistent Memory Region: Not Supported 00:15:35.523 Optional Asynchronous Events Supported 00:15:35.523 Namespace Attribute Notices: Supported 00:15:35.523 Firmware Activation Notices: Not Supported 00:15:35.523 ANA Change Notices: Not Supported 00:15:35.523 PLE Aggregate Log Change Notices: Not Supported 00:15:35.523 LBA Status Info Alert Notices: Not Supported 00:15:35.523 EGE Aggregate Log Change Notices: Not Supported 00:15:35.523 Normal NVM Subsystem Shutdown event: Not Supported 00:15:35.523 Zone Descriptor Change Notices: Not Supported 00:15:35.523 Discovery Log Change Notices: Not Supported 00:15:35.523 Controller Attributes 00:15:35.523 128-bit Host Identifier: Supported 00:15:35.523 Non-Operational Permissive Mode: Not Supported 00:15:35.523 NVM Sets: Not Supported 00:15:35.523 Read Recovery Levels: Not Supported 00:15:35.523 Endurance Groups: Not Supported 00:15:35.523 Predictable Latency Mode: Not Supported 00:15:35.523 Traffic Based Keep ALive: Not Supported 00:15:35.523 Namespace Granularity: Not Supported 00:15:35.523 SQ Associations: Not Supported 00:15:35.523 UUID List: Not Supported 00:15:35.523 Multi-Domain Subsystem: Not Supported 00:15:35.523 Fixed Capacity Management: Not Supported 00:15:35.523 Variable Capacity Management: Not Supported 00:15:35.523 Delete Endurance Group: Not Supported 00:15:35.523 Delete NVM Set: Not Supported 00:15:35.523 Extended LBA Formats Supported: Not Supported 00:15:35.523 Flexible Data Placement Supported: Not Supported 00:15:35.523 00:15:35.523 Controller Memory Buffer Support 00:15:35.523 ================================ 00:15:35.523 Supported: No 00:15:35.523 00:15:35.523 Persistent Memory Region Support 00:15:35.523 ================================ 00:15:35.523 Supported: No 00:15:35.523 00:15:35.523 Admin Command Set Attributes 00:15:35.523 ============================ 00:15:35.523 Security Send/Receive: Not Supported 
00:15:35.523 Format NVM: Not Supported 00:15:35.523 Firmware Activate/Download: Not Supported 00:15:35.523 Namespace Management: Not Supported 00:15:35.523 Device Self-Test: Not Supported 00:15:35.523 Directives: Not Supported 00:15:35.523 NVMe-MI: Not Supported 00:15:35.523 Virtualization Management: Not Supported 00:15:35.523 Doorbell Buffer Config: Not Supported 00:15:35.523 Get LBA Status Capability: Not Supported 00:15:35.523 Command & Feature Lockdown Capability: Not Supported 00:15:35.523 Abort Command Limit: 4 00:15:35.523 Async Event Request Limit: 4 00:15:35.523 Number of Firmware Slots: N/A 00:15:35.523 Firmware Slot 1 Read-Only: N/A 00:15:35.523 Firmware Activation Without Reset: N/A 00:15:35.523 Multiple Update Detection Support: N/A 00:15:35.523 Firmware Update Granularity: No Information Provided 00:15:35.523 Per-Namespace SMART Log: No 00:15:35.523 Asymmetric Namespace Access Log Page: Not Supported 00:15:35.523 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:35.523 Command Effects Log Page: Supported 00:15:35.523 Get Log Page Extended Data: Supported 00:15:35.523 Telemetry Log Pages: Not Supported 00:15:35.523 Persistent Event Log Pages: Not Supported 00:15:35.523 Supported Log Pages Log Page: May Support 00:15:35.523 Commands Supported & Effects Log Page: Not Supported 00:15:35.523 Feature Identifiers & Effects Log Page:May Support 00:15:35.523 NVMe-MI Commands & Effects Log Page: May Support 00:15:35.523 Data Area 4 for Telemetry Log: Not Supported 00:15:35.523 Error Log Page Entries Supported: 128 00:15:35.523 Keep Alive: Supported 00:15:35.523 Keep Alive Granularity: 10000 ms 00:15:35.523 00:15:35.523 NVM Command Set Attributes 00:15:35.523 ========================== 00:15:35.523 Submission Queue Entry Size 00:15:35.523 Max: 64 00:15:35.523 Min: 64 00:15:35.523 Completion Queue Entry Size 00:15:35.523 Max: 16 00:15:35.523 Min: 16 00:15:35.523 Number of Namespaces: 32 00:15:35.523 Compare Command: Supported 00:15:35.523 Write Uncorrectable 
Command: Not Supported 00:15:35.523 Dataset Management Command: Supported 00:15:35.523 Write Zeroes Command: Supported 00:15:35.523 Set Features Save Field: Not Supported 00:15:35.523 Reservations: Not Supported 00:15:35.523 Timestamp: Not Supported 00:15:35.523 Copy: Supported 00:15:35.523 Volatile Write Cache: Present 00:15:35.523 Atomic Write Unit (Normal): 1 00:15:35.523 Atomic Write Unit (PFail): 1 00:15:35.523 Atomic Compare & Write Unit: 1 00:15:35.523 Fused Compare & Write: Supported 00:15:35.523 Scatter-Gather List 00:15:35.523 SGL Command Set: Supported (Dword aligned) 00:15:35.523 SGL Keyed: Not Supported 00:15:35.523 SGL Bit Bucket Descriptor: Not Supported 00:15:35.523 SGL Metadata Pointer: Not Supported 00:15:35.523 Oversized SGL: Not Supported 00:15:35.523 SGL Metadata Address: Not Supported 00:15:35.523 SGL Offset: Not Supported 00:15:35.523 Transport SGL Data Block: Not Supported 00:15:35.523 Replay Protected Memory Block: Not Supported 00:15:35.523 00:15:35.523 Firmware Slot Information 00:15:35.523 ========================= 00:15:35.523 Active slot: 1 00:15:35.523 Slot 1 Firmware Revision: 25.01 00:15:35.523 00:15:35.523 00:15:35.523 Commands Supported and Effects 00:15:35.523 ============================== 00:15:35.523 Admin Commands 00:15:35.523 -------------- 00:15:35.523 Get Log Page (02h): Supported 00:15:35.523 Identify (06h): Supported 00:15:35.523 Abort (08h): Supported 00:15:35.523 Set Features (09h): Supported 00:15:35.523 Get Features (0Ah): Supported 00:15:35.523 Asynchronous Event Request (0Ch): Supported 00:15:35.523 Keep Alive (18h): Supported 00:15:35.523 I/O Commands 00:15:35.523 ------------ 00:15:35.523 Flush (00h): Supported LBA-Change 00:15:35.523 Write (01h): Supported LBA-Change 00:15:35.523 Read (02h): Supported 00:15:35.523 Compare (05h): Supported 00:15:35.523 Write Zeroes (08h): Supported LBA-Change 00:15:35.523 Dataset Management (09h): Supported LBA-Change 00:15:35.523 Copy (19h): Supported LBA-Change 00:15:35.523 
00:15:35.523 Error Log 00:15:35.523 ========= 00:15:35.523 00:15:35.523 Arbitration 00:15:35.523 =========== 00:15:35.523 Arbitration Burst: 1 00:15:35.523 00:15:35.523 Power Management 00:15:35.523 ================ 00:15:35.523 Number of Power States: 1 00:15:35.523 Current Power State: Power State #0 00:15:35.523 Power State #0: 00:15:35.523 Max Power: 0.00 W 00:15:35.523 Non-Operational State: Operational 00:15:35.523 Entry Latency: Not Reported 00:15:35.523 Exit Latency: Not Reported 00:15:35.523 Relative Read Throughput: 0 00:15:35.523 Relative Read Latency: 0 00:15:35.523 Relative Write Throughput: 0 00:15:35.523 Relative Write Latency: 0 00:15:35.523 Idle Power: Not Reported 00:15:35.523 Active Power: Not Reported 00:15:35.523 Non-Operational Permissive Mode: Not Supported 00:15:35.523 00:15:35.523 Health Information 00:15:35.523 ================== 00:15:35.523 Critical Warnings: 00:15:35.524 Available Spare Space: OK 00:15:35.524 Temperature: OK 00:15:35.524 Device Reliability: OK 00:15:35.524 Read Only: No 00:15:35.524 Volatile Memory Backup: OK 00:15:35.524 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:35.524 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:35.524 Available Spare: 0% 00:15:35.524 Available Spare Threshold: 0% 00:15:35.524 Life Percentage Used: 0%
[2024-12-09 10:26:07.799211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:35.524 [2024-12-09 10:26:07.799245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:35.524 [2024-12-09 10:26:07.799288] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:35.524 [2024-12-09 10:26:07.799306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.524 [2024-12-09 10:26:07.799317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.524 [2024-12-09 10:26:07.799326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.524 [2024-12-09 10:26:07.799336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:35.524 [2024-12-09 10:26:07.799784] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:35.524 [2024-12-09 10:26:07.799805] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:35.524 [2024-12-09 10:26:07.800788] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:35.524 [2024-12-09 10:26:07.800859] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:35.524 [2024-12-09 10:26:07.800872] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:35.524 [2024-12-09 10:26:07.801799] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:35.524 [2024-12-09 10:26:07.801821] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:35.524 [2024-12-09 10:26:07.801874] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:35.524 [2024-12-09 10:26:07.807151] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:35.524 Data Units Read: 0 00:15:35.524 Data Units Written: 0 00:15:35.524 Host Read Commands: 0 00:15:35.524 Host Write Commands: 0 00:15:35.524 Controller Busy Time: 0 minutes 00:15:35.524 Power Cycles: 0 00:15:35.524 Power On Hours: 0 hours 00:15:35.524 Unsafe Shutdowns: 0 00:15:35.524 Unrecoverable Media Errors: 0 00:15:35.524 Lifetime Error Log Entries: 0 00:15:35.524 Warning Temperature Time: 0 minutes 00:15:35.524 Critical Temperature Time: 0 minutes 00:15:35.524 00:15:35.524 Number of Queues 00:15:35.524 ================ 00:15:35.524 Number of I/O Submission Queues: 127 00:15:35.524 Number of I/O Completion Queues: 127 00:15:35.524 00:15:35.524 Active Namespaces 00:15:35.524 ================= 00:15:35.524 Namespace ID:1 00:15:35.524 Error Recovery Timeout: Unlimited 00:15:35.524 Command Set Identifier: NVM (00h) 00:15:35.524 Deallocate: Supported 00:15:35.524 Deallocated/Unwritten Error: Not Supported 00:15:35.524 Deallocated Read Value: Unknown 00:15:35.524 Deallocate in Write Zeroes: Not Supported 00:15:35.524 Deallocated Guard Field: 0xFFFF 00:15:35.524 Flush: Supported 00:15:35.524 Reservation: Supported 00:15:35.524 Namespace Sharing Capabilities: Multiple Controllers 00:15:35.524 Size (in LBAs): 131072 (0GiB) 00:15:35.524 Capacity (in LBAs): 131072 (0GiB) 00:15:35.524 Utilization (in LBAs): 131072 (0GiB) 00:15:35.524 NGUID: A78B944BE9C140B8A2BA9C0E1AA1D2B3 00:15:35.524 UUID: a78b944b-e9c1-40b8-a2ba-9c0e1aa1d2b3 00:15:35.524 Thin Provisioning: Not Supported 00:15:35.524 Per-NS Atomic Units: Yes 00:15:35.524 Atomic Boundary Size (Normal): 0 00:15:35.524 Atomic Boundary Size (PFail): 0 00:15:35.524 Atomic Boundary Offset: 0 00:15:35.524 Maximum Single Source Range Length: 65535 00:15:35.524 Maximum Copy Length: 65535 00:15:35.524 Maximum Source Range Count: 1 00:15:35.524 NGUID/EUI64 Never Reused: No 00:15:35.524 Namespace Write Protected: No 00:15:35.524 Number of LBA Formats: 1 00:15:35.524 Current LBA Format: LBA Format #00 00:15:35.524 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:15:35.524 00:15:35.524 10:26:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:35.781 [2024-12-09 10:26:08.145364] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:41.043 Initializing NVMe Controllers 00:15:41.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:41.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:41.043 Initialization complete. Launching workers. 00:15:41.043 ======================================================== 00:15:41.043 Latency(us) 00:15:41.044 Device Information : IOPS MiB/s Average min max 00:15:41.044 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 30157.80 117.80 4243.57 1219.07 9343.66 00:15:41.044 ======================================================== 00:15:41.044 Total : 30157.80 117.80 4243.57 1219.07 9343.66 00:15:41.044 00:15:41.044 [2024-12-09 10:26:13.171255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:41.044 10:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:41.300 [2024-12-09 10:26:13.512681] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:46.556 Initializing NVMe Controllers 00:15:46.556 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:46.556 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:46.556 Initialization complete. Launching workers. 00:15:46.556 ======================================================== 00:15:46.556 Latency(us) 00:15:46.556 Device Information : IOPS MiB/s Average min max 00:15:46.556 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.04 4973.16 15972.91 00:15:46.556 ======================================================== 00:15:46.556 Total : 16025.60 62.60 7997.04 4973.16 15972.91 00:15:46.556 00:15:46.556 [2024-12-09 10:26:18.548977] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:46.556 10:26:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:46.556 [2024-12-09 10:26:18.862373] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:51.818 [2024-12-09 10:26:23.935511] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:51.818 Initializing NVMe Controllers 00:15:51.818 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:51.818 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:51.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:51.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:51.818 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:51.818 Initialization complete. 
Launching workers. 00:15:51.818 Starting thread on core 2 00:15:51.818 Starting thread on core 3 00:15:51.818 Starting thread on core 1 00:15:51.818 10:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:52.076 [2024-12-09 10:26:24.335580] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.357 [2024-12-09 10:26:27.409976] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.357 Initializing NVMe Controllers 00:15:55.357 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.357 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:55.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:55.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:55.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:55.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:55.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:55.357 Initialization complete. Launching workers. 
00:15:55.357 Starting thread on core 1 with urgent priority queue 00:15:55.357 Starting thread on core 2 with urgent priority queue 00:15:55.357 Starting thread on core 3 with urgent priority queue 00:15:55.357 Starting thread on core 0 with urgent priority queue 00:15:55.357 SPDK bdev Controller (SPDK1 ) core 0: 6081.00 IO/s 16.44 secs/100000 ios 00:15:55.357 SPDK bdev Controller (SPDK1 ) core 1: 6309.67 IO/s 15.85 secs/100000 ios 00:15:55.357 SPDK bdev Controller (SPDK1 ) core 2: 5391.00 IO/s 18.55 secs/100000 ios 00:15:55.357 SPDK bdev Controller (SPDK1 ) core 3: 5694.00 IO/s 17.56 secs/100000 ios 00:15:55.357 ======================================================== 00:15:55.357 00:15:55.357 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:55.615 [2024-12-09 10:26:27.805701] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:55.615 Initializing NVMe Controllers 00:15:55.615 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.615 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:55.615 Namespace ID: 1 size: 0GB 00:15:55.615 Initialization complete. 00:15:55.615 INFO: using host memory buffer for IO 00:15:55.615 Hello world! 
00:15:55.615 [2024-12-09 10:26:27.843340] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:55.615 10:26:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:55.872 [2024-12-09 10:26:28.231562] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:57.245 Initializing NVMe Controllers 00:15:57.245 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.245 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.246 Initialization complete. Launching workers. 00:15:57.246 submit (in ns) avg, min, max = 7611.7, 3575.6, 4016330.0 00:15:57.246 complete (in ns) avg, min, max = 27585.2, 2063.3, 4083453.3 00:15:57.246 00:15:57.246 Submit histogram 00:15:57.246 ================ 00:15:57.246 Range in us Cumulative Count 00:15:57.246 3.556 - 3.579: 0.0329% ( 4) 00:15:57.246 3.579 - 3.603: 1.0448% ( 123) 00:15:57.246 3.603 - 3.627: 5.0679% ( 489) 00:15:57.246 3.627 - 3.650: 15.5245% ( 1271) 00:15:57.246 3.650 - 3.674: 24.1464% ( 1048) 00:15:57.246 3.674 - 3.698: 32.3241% ( 994) 00:15:57.246 3.698 - 3.721: 40.4854% ( 992) 00:15:57.246 3.721 - 3.745: 47.4702% ( 849) 00:15:57.246 3.745 - 3.769: 53.5253% ( 736) 00:15:57.246 3.769 - 3.793: 57.6224% ( 498) 00:15:57.246 3.793 - 3.816: 60.8885% ( 397) 00:15:57.246 3.816 - 3.840: 64.0230% ( 381) 00:15:57.246 3.840 - 3.864: 67.4208% ( 413) 00:15:57.246 3.864 - 3.887: 71.4274% ( 487) 00:15:57.246 3.887 - 3.911: 76.4295% ( 608) 00:15:57.246 3.911 - 3.935: 80.9214% ( 546) 00:15:57.246 3.935 - 3.959: 84.4755% ( 432) 00:15:57.246 3.959 - 3.982: 86.9107% ( 296) 00:15:57.246 3.982 - 4.006: 88.6631% ( 213) 00:15:57.246 4.006 - 4.030: 89.9877% ( 161) 00:15:57.246 4.030 - 4.053: 90.9091% ( 
112) 00:15:57.246 4.053 - 4.077: 91.8223% ( 111) 00:15:57.246 4.077 - 4.101: 92.5710% ( 91) 00:15:57.246 4.101 - 4.124: 93.4183% ( 103) 00:15:57.246 4.124 - 4.148: 94.2904% ( 106) 00:15:57.246 4.148 - 4.172: 94.9404% ( 79) 00:15:57.246 4.172 - 4.196: 95.4340% ( 60) 00:15:57.246 4.196 - 4.219: 95.7713% ( 41) 00:15:57.246 4.219 - 4.243: 96.0263% ( 31) 00:15:57.246 4.243 - 4.267: 96.2731% ( 30) 00:15:57.246 4.267 - 4.290: 96.4706% ( 24) 00:15:57.246 4.290 - 4.314: 96.5858% ( 14) 00:15:57.246 4.314 - 4.338: 96.6845% ( 12) 00:15:57.246 4.338 - 4.361: 96.7914% ( 13) 00:15:57.246 4.361 - 4.385: 96.8573% ( 8) 00:15:57.246 4.385 - 4.409: 96.9313% ( 9) 00:15:57.246 4.409 - 4.433: 97.0136% ( 10) 00:15:57.246 4.433 - 4.456: 97.0876% ( 9) 00:15:57.246 4.456 - 4.480: 97.1699% ( 10) 00:15:57.246 4.480 - 4.504: 97.2028% ( 4) 00:15:57.246 4.504 - 4.527: 97.2357% ( 4) 00:15:57.246 4.527 - 4.551: 97.2439% ( 1) 00:15:57.246 4.551 - 4.575: 97.2686% ( 3) 00:15:57.246 4.575 - 4.599: 97.2768% ( 1) 00:15:57.246 4.670 - 4.693: 97.2933% ( 2) 00:15:57.246 4.764 - 4.788: 97.3180% ( 3) 00:15:57.246 4.788 - 4.812: 97.3427% ( 3) 00:15:57.246 4.812 - 4.836: 97.3756% ( 4) 00:15:57.246 4.836 - 4.859: 97.4414% ( 8) 00:15:57.246 4.859 - 4.883: 97.5319% ( 11) 00:15:57.246 4.883 - 4.907: 97.5648% ( 4) 00:15:57.246 4.907 - 4.930: 97.6224% ( 7) 00:15:57.246 4.930 - 4.954: 97.6717% ( 6) 00:15:57.246 4.954 - 4.978: 97.7046% ( 4) 00:15:57.246 4.978 - 5.001: 97.7951% ( 11) 00:15:57.246 5.001 - 5.025: 97.8281% ( 4) 00:15:57.246 5.049 - 5.073: 97.8527% ( 3) 00:15:57.246 5.073 - 5.096: 97.9350% ( 10) 00:15:57.246 5.096 - 5.120: 97.9761% ( 5) 00:15:57.246 5.120 - 5.144: 97.9926% ( 2) 00:15:57.246 5.144 - 5.167: 98.0255% ( 4) 00:15:57.246 5.167 - 5.191: 98.0584% ( 4) 00:15:57.246 5.191 - 5.215: 98.0666% ( 1) 00:15:57.246 5.215 - 5.239: 98.0749% ( 1) 00:15:57.246 5.239 - 5.262: 98.0913% ( 2) 00:15:57.246 5.262 - 5.286: 98.1160% ( 3) 00:15:57.246 5.310 - 5.333: 98.1325% ( 2) 00:15:57.246 5.357 - 5.381: 98.1407% ( 1) 
00:15:57.246 5.381 - 5.404: 98.1571% ( 2) 00:15:57.246 5.570 - 5.594: 98.1654% ( 1) 00:15:57.246 5.618 - 5.641: 98.1736% ( 1) 00:15:57.246 5.641 - 5.665: 98.1818% ( 1) 00:15:57.246 5.665 - 5.689: 98.1900% ( 1) 00:15:57.246 5.689 - 5.713: 98.1983% ( 1) 00:15:57.246 5.855 - 5.879: 98.2065% ( 1) 00:15:57.246 5.997 - 6.021: 98.2147% ( 1) 00:15:57.246 6.021 - 6.044: 98.2312% ( 2) 00:15:57.246 6.116 - 6.163: 98.2559% ( 3) 00:15:57.246 6.163 - 6.210: 98.2641% ( 1) 00:15:57.246 6.210 - 6.258: 98.2888% ( 3) 00:15:57.246 6.258 - 6.305: 98.2970% ( 1) 00:15:57.246 6.305 - 6.353: 98.3052% ( 1) 00:15:57.246 6.353 - 6.400: 98.3217% ( 2) 00:15:57.246 6.447 - 6.495: 98.3299% ( 1) 00:15:57.246 6.637 - 6.684: 98.3381% ( 1) 00:15:57.246 6.779 - 6.827: 98.3464% ( 1) 00:15:57.246 6.827 - 6.874: 98.3628% ( 2) 00:15:57.246 7.206 - 7.253: 98.3710% ( 1) 00:15:57.246 7.301 - 7.348: 98.3875% ( 2) 00:15:57.246 7.396 - 7.443: 98.3957% ( 1) 00:15:57.246 7.443 - 7.490: 98.4039% ( 1) 00:15:57.246 7.585 - 7.633: 98.4122% ( 1) 00:15:57.246 7.633 - 7.680: 98.4204% ( 1) 00:15:57.246 7.917 - 7.964: 98.4286% ( 1) 00:15:57.246 7.964 - 8.012: 98.4451% ( 2) 00:15:57.246 8.012 - 8.059: 98.4533% ( 1) 00:15:57.246 8.107 - 8.154: 98.4615% ( 1) 00:15:57.246 8.154 - 8.201: 98.4698% ( 1) 00:15:57.246 8.201 - 8.249: 98.4780% ( 1) 00:15:57.246 8.296 - 8.344: 98.4944% ( 2) 00:15:57.246 8.439 - 8.486: 98.5109% ( 2) 00:15:57.246 8.581 - 8.628: 98.5191% ( 1) 00:15:57.246 8.628 - 8.676: 98.5274% ( 1) 00:15:57.246 8.676 - 8.723: 98.5356% ( 1) 00:15:57.246 8.913 - 8.960: 98.5520% ( 2) 00:15:57.246 9.007 - 9.055: 98.5685% ( 2) 00:15:57.246 9.055 - 9.102: 98.5932% ( 3) 00:15:57.246 9.102 - 9.150: 98.6096% ( 2) 00:15:57.246 9.197 - 9.244: 98.6261% ( 2) 00:15:57.246 9.244 - 9.292: 98.6343% ( 1) 00:15:57.246 9.387 - 9.434: 98.6425% ( 1) 00:15:57.246 9.481 - 9.529: 98.6508% ( 1) 00:15:57.247 9.624 - 9.671: 98.6590% ( 1) 00:15:57.247 9.671 - 9.719: 98.6672% ( 1) 00:15:57.247 9.766 - 9.813: 98.6754% ( 1) 00:15:57.247 9.861 - 
9.908: 98.6837% ( 1) 00:15:57.247 9.908 - 9.956: 98.6919% ( 1) 00:15:57.247 10.287 - 10.335: 98.7001% ( 1) 00:15:57.247 10.382 - 10.430: 98.7166% ( 2) 00:15:57.247 10.714 - 10.761: 98.7248% ( 1) 00:15:57.247 10.761 - 10.809: 98.7495% ( 3) 00:15:57.247 10.809 - 10.856: 98.7577% ( 1) 00:15:57.247 11.236 - 11.283: 98.7659% ( 1) 00:15:57.247 11.283 - 11.330: 98.7742% ( 1) 00:15:57.247 11.662 - 11.710: 98.7824% ( 1) 00:15:57.247 11.947 - 11.994: 98.7988% ( 2) 00:15:57.247 12.231 - 12.326: 98.8071% ( 1) 00:15:57.247 12.326 - 12.421: 98.8153% ( 1) 00:15:57.247 12.516 - 12.610: 98.8235% ( 1) 00:15:57.247 12.610 - 12.705: 98.8400% ( 2) 00:15:57.247 12.800 - 12.895: 98.8482% ( 1) 00:15:57.247 13.274 - 13.369: 98.8564% ( 1) 00:15:57.247 13.369 - 13.464: 98.8729% ( 2) 00:15:57.247 13.748 - 13.843: 98.8811% ( 1) 00:15:57.247 13.843 - 13.938: 98.9058% ( 3) 00:15:57.247 14.222 - 14.317: 98.9140% ( 1) 00:15:57.247 14.507 - 14.601: 98.9223% ( 1) 00:15:57.247 15.170 - 15.265: 98.9305% ( 1) 00:15:57.247 15.360 - 15.455: 98.9387% ( 1) 00:15:57.247 15.455 - 15.550: 98.9469% ( 1) 00:15:57.247 15.644 - 15.739: 98.9552% ( 1) 00:15:57.247 17.256 - 17.351: 98.9716% ( 2) 00:15:57.247 17.351 - 17.446: 98.9798% ( 1) 00:15:57.247 17.446 - 17.541: 99.0045% ( 3) 00:15:57.247 17.541 - 17.636: 99.0210% ( 2) 00:15:57.247 17.636 - 17.730: 99.0868% ( 8) 00:15:57.247 17.730 - 17.825: 99.1115% ( 3) 00:15:57.247 17.825 - 17.920: 99.1362% ( 3) 00:15:57.247 17.920 - 18.015: 99.1855% ( 6) 00:15:57.247 18.015 - 18.110: 99.2596% ( 9) 00:15:57.247 18.110 - 18.204: 99.3172% ( 7) 00:15:57.247 18.204 - 18.299: 99.3501% ( 4) 00:15:57.247 18.299 - 18.394: 99.4077% ( 7) 00:15:57.247 18.394 - 18.489: 99.5146% ( 13) 00:15:57.247 18.489 - 18.584: 99.6051% ( 11) 00:15:57.247 18.584 - 18.679: 99.6791% ( 9) 00:15:57.247 18.679 - 18.773: 99.7121% ( 4) 00:15:57.247 18.868 - 18.963: 99.7532% ( 5) 00:15:57.247 18.963 - 19.058: 99.7861% ( 4) 00:15:57.247 19.058 - 19.153: 99.8108% ( 3) 00:15:57.247 19.153 - 19.247: 99.8272% ( 
2) 00:15:57.247 19.247 - 19.342: 99.8355% ( 1) 00:15:57.247 19.437 - 19.532: 99.8437% ( 1) 00:15:57.247 19.532 - 19.627: 99.8519% ( 1) 00:15:57.247 19.721 - 19.816: 99.8601% ( 1) 00:15:57.247 21.049 - 21.144: 99.8684% ( 1) 00:15:57.247 21.523 - 21.618: 99.8766% ( 1) 00:15:57.247 21.618 - 21.713: 99.8848% ( 1) 00:15:57.247 23.988 - 24.083: 99.8930% ( 1) 00:15:57.247 28.065 - 28.255: 99.9013% ( 1) 00:15:57.247 29.961 - 30.151: 99.9095% ( 1) 00:15:57.247 3980.705 - 4004.978: 99.9835% ( 9) 00:15:57.247 4004.978 - 4029.250: 100.0000% ( 2) 00:15:57.247 00:15:57.247 Complete histogram 00:15:57.247 ================== 00:15:57.247 Range in us Cumulative Count 00:15:57.247 2.062 - 2.074: 3.4965% ( 425) 00:15:57.247 2.074 - 2.086: 35.4833% ( 3888) 00:15:57.247 2.086 - 2.098: 40.1152% ( 563) 00:15:57.247 2.098 - 2.110: 47.3632% ( 881) 00:15:57.247 2.110 - 2.121: 58.4944% ( 1353) 00:15:57.247 2.121 - 2.133: 59.9835% ( 181) 00:15:57.247 2.133 - 2.145: 66.0633% ( 739) 00:15:57.247 2.145 - 2.157: 73.8132% ( 942) 00:15:57.247 2.157 - 2.169: 74.8252% ( 123) 00:15:57.247 2.169 - 2.181: 78.0173% ( 388) 00:15:57.247 2.181 - 2.193: 80.5512% ( 308) 00:15:57.247 2.193 - 2.204: 81.1765% ( 76) 00:15:57.247 2.204 - 2.216: 83.0687% ( 230) 00:15:57.247 2.216 - 2.228: 88.0049% ( 600) 00:15:57.247 2.228 - 2.240: 89.8067% ( 219) 00:15:57.247 2.240 - 2.252: 91.4356% ( 198) 00:15:57.247 2.252 - 2.264: 92.7520% ( 160) 00:15:57.247 2.264 - 2.276: 93.0070% ( 31) 00:15:57.247 2.276 - 2.287: 93.4677% ( 56) 00:15:57.247 2.287 - 2.299: 94.3809% ( 111) 00:15:57.247 2.299 - 2.311: 95.0638% ( 83) 00:15:57.247 2.311 - 2.323: 95.2036% ( 17) 00:15:57.247 2.323 - 2.335: 95.2201% ( 2) 00:15:57.247 2.335 - 2.347: 95.2530% ( 4) 00:15:57.247 2.347 - 2.359: 95.3353% ( 10) 00:15:57.247 2.359 - 2.370: 95.4175% ( 10) 00:15:57.247 2.370 - 2.382: 95.7219% ( 37) 00:15:57.247 2.382 - 2.394: 95.9687% ( 30) 00:15:57.247 2.394 - 2.406: 96.1826% ( 26) 00:15:57.247 2.406 - 2.418: 96.4130% ( 28) 00:15:57.247 2.418 - 2.430: 
96.6598% ( 30) 00:15:57.247 2.430 - 2.441: 96.8737% ( 26) 00:15:57.247 2.441 - 2.453: 97.0383% ( 20) 00:15:57.247 2.453 - 2.465: 97.2439% ( 25) 00:15:57.247 2.465 - 2.477: 97.4249% ( 22) 00:15:57.247 2.477 - 2.489: 97.5730% ( 18) 00:15:57.247 2.489 - 2.501: 97.7458% ( 21) 00:15:57.247 2.501 - 2.513: 97.8281% ( 10) 00:15:57.247 2.513 - 2.524: 97.9515% ( 15) 00:15:57.247 2.524 - 2.536: 98.0502% ( 12) 00:15:57.247 2.536 - 2.548: 98.1078% ( 7) 00:15:57.247 2.548 - 2.560: 98.1654% ( 7) 00:15:57.247 2.560 - 2.572: 98.1983% ( 4) 00:15:57.247 2.572 - 2.584: 98.2559% ( 7) 00:15:57.247 2.584 - 2.596: 98.2723% ( 2) 00:15:57.247 2.596 - 2.607: 98.2888% ( 2) 00:15:57.247 2.631 - 2.643: 98.2970% ( 1) 00:15:57.247 2.690 - 2.702: 98.3052% ( 1) 00:15:57.247 2.750 - 2.761: 98.3135% ( 1) 00:15:57.247 2.809 - 2.821: 98.3217% ( 1) 00:15:57.247 2.880 - 2.892: 98.3299% ( 1) 00:15:57.247 3.081 - 3.105: 98.3381% ( 1) 00:15:57.247 3.319 - 3.342: 98.3464% ( 1) 00:15:57.247 3.366 - 3.390: 98.3628% ( 2) 00:15:57.247 3.390 - 3.413: 98.3710% ( 1) 00:15:57.248 3.413 - 3.437: 98.3875% ( 2) 00:15:57.248 3.461 - 3.484: 98.4039% ( 2) 00:15:57.248 3.484 - 3.508: 98.4122% ( 1) 00:15:57.248 3.650 - 3.674: 98.4369% ( 3) 00:15:57.248 3.698 - 3.721: 98.4451% ( 1) 00:15:57.248 3.721 - 3.745: 98.4615% ( 2) 00:15:57.248 3.793 - 3.816: 98.4780% ( 2) 00:15:57.248 3.816 - 3.840: 98.4862% ( 1) 00:15:57.248 3.864 - 3.887: 98.4944% ( 1) 00:15:57.248 3.911 - 3.935: 98.5027% ( 1) 00:15:57.248 3.935 - 3.959: 98.5109% ( 1) 00:15:57.248 3.959 - 3.982: 98.5274% ( 2) 00:15:57.248 4.077 - 4.101: 98.5356% ( 1) 00:15:57.248 4.172 - 4.196: 98.5438% ( 1) 00:15:57.248 5.215 - 5.239: 98.5520% ( 1) 00:15:57.248 5.570 - 5.594: 98.5603% ( 1) 00:15:57.248 5.902 - 5.926: 98.5685% ( 1) 00:15:57.248 5.950 - 5.973: 98.5767% ( 1) 00:15:57.248 5.973 - 5.997: 98.5849% ( 1) 00:15:57.248 6.068 - 6.116: 98.5932% ( 1) 00:15:57.248 6.447 - 6.495: 98.6014% ( 1) 00:15:57.248 6.637 - 6.684: 98.6096% ( 1) 00:15:57.248 6.684 - 6.732: 98.6179% ( 1) 
[2024-12-09 10:26:29.254402] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:57.248 6.779 - 6.827: 98.6261% ( 1) 00:15:57.248 6.874 - 6.921: 98.6343% ( 1) 00:15:57.248 6.921 - 6.969: 98.6425% ( 1) 00:15:57.248 7.016 - 7.064: 98.6508% ( 1) 00:15:57.248 7.159 - 7.206: 98.6590% ( 1) 00:15:57.248 7.396 - 7.443: 98.6672% ( 1) 00:15:57.248 7.490 - 7.538: 98.6754% ( 1) 00:15:57.248 7.585 - 7.633: 98.6837% ( 1) 00:15:57.248 7.964 - 8.012: 98.6919% ( 1) 00:15:57.248 8.913 - 8.960: 98.7001% ( 1) 00:15:57.248 9.624 - 9.671: 98.7084% ( 1) 00:15:57.248 9.766 - 9.813: 98.7166% ( 1) 00:15:57.248 12.326 - 12.421: 98.7248% ( 1) 00:15:57.248 14.791 - 14.886: 98.7330% ( 1) 00:15:57.248 15.550 - 15.644: 98.7413% ( 1) 00:15:57.248 15.644 - 15.739: 98.7495% ( 1) 00:15:57.248 15.739 - 15.834: 98.7659% ( 2) 00:15:57.248 15.834 - 15.929: 98.7742% ( 1) 00:15:57.248 15.929 - 16.024: 98.7824% ( 1) 00:15:57.248 16.024 - 16.119: 98.7988% ( 2) 00:15:57.248 16.119 - 16.213: 98.8318% ( 4) 00:15:57.248 16.213 - 16.308: 98.8400% ( 1) 00:15:57.248 16.308 - 16.403: 98.9140% ( 9) 00:15:57.248 16.403 - 16.498: 98.9387% ( 3) 00:15:57.248 16.498 - 16.593: 99.0128% ( 9) 00:15:57.248 16.593 - 16.687: 99.0703% ( 7) 00:15:57.248 16.687 - 16.782: 99.1032% ( 4) 00:15:57.248 16.782 - 16.877: 99.1608% ( 7) 00:15:57.248 16.877 - 16.972: 99.1855% ( 3) 00:15:57.248 16.972 - 17.067: 99.1937% ( 1) 00:15:57.248 17.161 - 17.256: 99.2102% ( 2) 00:15:57.248 17.256 - 17.351: 99.2349% ( 3) 00:15:57.248 17.446 - 17.541: 99.2431% ( 1) 00:15:57.248 17.541 - 17.636: 99.2513% ( 1) 00:15:57.248 17.636 - 17.730: 99.2596% ( 1) 00:15:57.248 17.730 - 17.825: 99.2760% ( 2) 00:15:57.248 17.825 - 17.920: 99.2842% ( 1) 00:15:57.248 18.110 - 18.204: 99.3089% ( 3) 00:15:57.248 18.489 - 18.584: 99.3254% ( 2) 00:15:57.248 18.584 - 18.679: 99.3336% ( 1) 00:15:57.248 18.868 - 18.963: 99.3418% ( 1) 00:15:57.248 21.144 - 21.239: 99.3501% ( 1) 00:15:57.248 22.187 - 22.281:
99.3583% ( 1) 00:15:57.248 26.738 - 26.927: 99.3665% ( 1) 00:15:57.248 3665.161 - 3689.434: 99.3747% ( 1) 00:15:57.248 3980.705 - 4004.978: 99.7943% ( 51) 00:15:57.248 4004.978 - 4029.250: 99.9835% ( 23) 00:15:57.248 4029.250 - 4053.523: 99.9918% ( 1) 00:15:57.248 4077.796 - 4102.068: 100.0000% ( 1) 00:15:57.248 00:15:57.248 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:57.248 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:57.248 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:57.248 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:57.248 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.248 [ 00:15:57.248 { 00:15:57.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:57.248 "subtype": "Discovery", 00:15:57.248 "listen_addresses": [], 00:15:57.248 "allow_any_host": true, 00:15:57.248 "hosts": [] 00:15:57.248 }, 00:15:57.248 { 00:15:57.248 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:57.248 "subtype": "NVMe", 00:15:57.248 "listen_addresses": [ 00:15:57.248 { 00:15:57.248 "trtype": "VFIOUSER", 00:15:57.248 "adrfam": "IPv4", 00:15:57.248 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:57.248 "trsvcid": "0" 00:15:57.248 } 00:15:57.248 ], 00:15:57.248 "allow_any_host": true, 00:15:57.248 "hosts": [], 00:15:57.248 "serial_number": "SPDK1", 00:15:57.248 "model_number": "SPDK bdev Controller", 00:15:57.248 "max_namespaces": 32, 00:15:57.248 "min_cntlid": 1, 00:15:57.248 "max_cntlid": 65519, 00:15:57.248 "namespaces": [ 00:15:57.248 { 00:15:57.248 "nsid": 1, 
00:15:57.248 "bdev_name": "Malloc1", 00:15:57.248 "name": "Malloc1", 00:15:57.248 "nguid": "A78B944BE9C140B8A2BA9C0E1AA1D2B3", 00:15:57.248 "uuid": "a78b944b-e9c1-40b8-a2ba-9c0e1aa1d2b3" 00:15:57.248 } 00:15:57.248 ] 00:15:57.248 }, 00:15:57.248 { 00:15:57.248 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:57.248 "subtype": "NVMe", 00:15:57.248 "listen_addresses": [ 00:15:57.248 { 00:15:57.248 "trtype": "VFIOUSER", 00:15:57.248 "adrfam": "IPv4", 00:15:57.248 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:57.248 "trsvcid": "0" 00:15:57.249 } 00:15:57.249 ], 00:15:57.249 "allow_any_host": true, 00:15:57.249 "hosts": [], 00:15:57.249 "serial_number": "SPDK2", 00:15:57.249 "model_number": "SPDK bdev Controller", 00:15:57.249 "max_namespaces": 32, 00:15:57.249 "min_cntlid": 1, 00:15:57.249 "max_cntlid": 65519, 00:15:57.249 "namespaces": [ 00:15:57.249 { 00:15:57.249 "nsid": 1, 00:15:57.249 "bdev_name": "Malloc2", 00:15:57.249 "name": "Malloc2", 00:15:57.249 "nguid": "DA7CC58D06704251B443A459CE40D530", 00:15:57.249 "uuid": "da7cc58d-0670-4251-b443-a459ce40d530" 00:15:57.249 } 00:15:57.249 ] 00:15:57.249 } 00:15:57.249 ] 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2519923 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user 
-- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:57.249 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:57.507 [2024-12-09 10:26:29.740616] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:57.507 Malloc3 00:15:57.507 10:26:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:57.764 [2024-12-09 10:26:30.141670] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:57.764 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:57.764 Asynchronous Event Request test 00:15:57.764 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.764 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:57.764 Registering asynchronous event callbacks... 00:15:57.764 Starting namespace attribute notice tests for all controllers... 00:15:57.764 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:57.764 aer_cb - Changed Namespace 00:15:57.764 Cleaning up... 
00:15:58.021 [ 00:15:58.021 { 00:15:58.021 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:58.021 "subtype": "Discovery", 00:15:58.021 "listen_addresses": [], 00:15:58.021 "allow_any_host": true, 00:15:58.021 "hosts": [] 00:15:58.021 }, 00:15:58.021 { 00:15:58.021 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:58.021 "subtype": "NVMe", 00:15:58.021 "listen_addresses": [ 00:15:58.021 { 00:15:58.021 "trtype": "VFIOUSER", 00:15:58.021 "adrfam": "IPv4", 00:15:58.021 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:58.021 "trsvcid": "0" 00:15:58.021 } 00:15:58.021 ], 00:15:58.021 "allow_any_host": true, 00:15:58.021 "hosts": [], 00:15:58.021 "serial_number": "SPDK1", 00:15:58.021 "model_number": "SPDK bdev Controller", 00:15:58.021 "max_namespaces": 32, 00:15:58.021 "min_cntlid": 1, 00:15:58.021 "max_cntlid": 65519, 00:15:58.021 "namespaces": [ 00:15:58.021 { 00:15:58.021 "nsid": 1, 00:15:58.021 "bdev_name": "Malloc1", 00:15:58.021 "name": "Malloc1", 00:15:58.021 "nguid": "A78B944BE9C140B8A2BA9C0E1AA1D2B3", 00:15:58.021 "uuid": "a78b944b-e9c1-40b8-a2ba-9c0e1aa1d2b3" 00:15:58.021 }, 00:15:58.021 { 00:15:58.021 "nsid": 2, 00:15:58.021 "bdev_name": "Malloc3", 00:15:58.021 "name": "Malloc3", 00:15:58.021 "nguid": "2DF019FBDA10421CAFED9CF2FCE022AF", 00:15:58.021 "uuid": "2df019fb-da10-421c-afed-9cf2fce022af" 00:15:58.021 } 00:15:58.021 ] 00:15:58.021 }, 00:15:58.021 { 00:15:58.021 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:58.021 "subtype": "NVMe", 00:15:58.021 "listen_addresses": [ 00:15:58.021 { 00:15:58.021 "trtype": "VFIOUSER", 00:15:58.021 "adrfam": "IPv4", 00:15:58.022 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:58.022 "trsvcid": "0" 00:15:58.022 } 00:15:58.022 ], 00:15:58.022 "allow_any_host": true, 00:15:58.022 "hosts": [], 00:15:58.022 "serial_number": "SPDK2", 00:15:58.022 "model_number": "SPDK bdev Controller", 00:15:58.022 "max_namespaces": 32, 00:15:58.022 "min_cntlid": 1, 00:15:58.022 "max_cntlid": 65519, 00:15:58.022 "namespaces": [ 
00:15:58.022 { 00:15:58.022 "nsid": 1, 00:15:58.022 "bdev_name": "Malloc2", 00:15:58.022 "name": "Malloc2", 00:15:58.022 "nguid": "DA7CC58D06704251B443A459CE40D530", 00:15:58.022 "uuid": "da7cc58d-0670-4251-b443-a459ce40d530" 00:15:58.022 } 00:15:58.022 ] 00:15:58.022 } 00:15:58.022 ] 00:15:58.022 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2519923 00:15:58.022 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:58.022 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:58.022 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:58.022 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:58.022 [2024-12-09 10:26:30.462592] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:15:58.022 [2024-12-09 10:26:30.462639] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520059 ] 00:15:58.281 [2024-12-09 10:26:30.509998] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:58.281 [2024-12-09 10:26:30.518433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:58.281 [2024-12-09 10:26:30.518481] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd8dd3a8000 00:15:58.281 [2024-12-09 10:26:30.519460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.520452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.521439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.522446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.523469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.524487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.525479] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:58.281 
[2024-12-09 10:26:30.526488] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:58.281 [2024-12-09 10:26:30.527497] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:58.281 [2024-12-09 10:26:30.527519] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd8dd39d000 00:15:58.281 [2024-12-09 10:26:30.528971] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:58.281 [2024-12-09 10:26:30.546010] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:58.281 [2024-12-09 10:26:30.546045] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:58.281 [2024-12-09 10:26:30.551163] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:58.281 [2024-12-09 10:26:30.551217] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:58.281 [2024-12-09 10:26:30.551304] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:58.281 [2024-12-09 10:26:30.551333] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:58.281 [2024-12-09 10:26:30.551345] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:58.281 [2024-12-09 10:26:30.552197] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:58.281 [2024-12-09 10:26:30.552224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:58.281 [2024-12-09 10:26:30.552239] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:58.281 [2024-12-09 10:26:30.553197] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:58.281 [2024-12-09 10:26:30.553219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:58.281 [2024-12-09 10:26:30.553233] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:58.281 [2024-12-09 10:26:30.554208] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:58.281 [2024-12-09 10:26:30.554231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:58.281 [2024-12-09 10:26:30.555217] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:58.281 [2024-12-09 10:26:30.555238] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:58.281 [2024-12-09 10:26:30.555248] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:58.281 [2024-12-09 10:26:30.555260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:58.281 [2024-12-09 10:26:30.555370] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:58.281 [2024-12-09 10:26:30.555378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:58.281 [2024-12-09 10:26:30.555386] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:58.281 [2024-12-09 10:26:30.556243] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:58.281 [2024-12-09 10:26:30.557250] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:58.281 [2024-12-09 10:26:30.558260] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:58.281 [2024-12-09 10:26:30.559251] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:58.281 [2024-12-09 10:26:30.559323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:58.281 [2024-12-09 10:26:30.560266] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:58.281 [2024-12-09 10:26:30.560286] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:58.281 [2024-12-09 10:26:30.560296] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:58.281 [2024-12-09 10:26:30.560324] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:58.281 [2024-12-09 10:26:30.560342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:58.281 [2024-12-09 10:26:30.560366] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.281 [2024-12-09 10:26:30.560376] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.281 [2024-12-09 10:26:30.560383] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.281 [2024-12-09 10:26:30.560399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.281 [2024-12-09 10:26:30.569154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:58.281 [2024-12-09 10:26:30.569181] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:58.281 [2024-12-09 10:26:30.569191] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:58.281 [2024-12-09 10:26:30.569199] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:58.281 [2024-12-09 10:26:30.569206] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:58.281 [2024-12-09 10:26:30.569214] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:58.281 [2024-12-09 10:26:30.569221] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:58.281 [2024-12-09 10:26:30.569229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:58.281 [2024-12-09 10:26:30.569241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:58.281 [2024-12-09 10:26:30.569256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:58.281 [2024-12-09 10:26:30.577152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:58.281 [2024-12-09 10:26:30.577208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.282 [2024-12-09 10:26:30.577223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.282 [2024-12-09 10:26:30.577235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.282 [2024-12-09 10:26:30.577248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:58.282 [2024-12-09 10:26:30.577257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.577274] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.577290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.585152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.585175] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:58.282 [2024-12-09 10:26:30.585185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.585198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.585207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.585221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.593150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.593235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.593253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:58.282 
[2024-12-09 10:26:30.593267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:58.282 [2024-12-09 10:26:30.593276] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:58.282 [2024-12-09 10:26:30.593282] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.282 [2024-12-09 10:26:30.593291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.601168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.601191] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:58.282 [2024-12-09 10:26:30.601211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.601226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.601239] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.282 [2024-12-09 10:26:30.601247] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.282 [2024-12-09 10:26:30.601253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.282 [2024-12-09 10:26:30.601263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.609167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.609195] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.609211] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.609225] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:58.282 [2024-12-09 10:26:30.609234] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.282 [2024-12-09 10:26:30.609240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.282 [2024-12-09 10:26:30.609253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.617168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.617189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617255] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:58.282 [2024-12-09 10:26:30.617262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:58.282 [2024-12-09 10:26:30.617270] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:58.282 [2024-12-09 10:26:30.617294] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.625166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.625193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.633164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.633191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.641152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 
10:26:30.641177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.649165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.649197] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:58.282 [2024-12-09 10:26:30.649209] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:58.282 [2024-12-09 10:26:30.649215] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:58.282 [2024-12-09 10:26:30.649221] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:58.282 [2024-12-09 10:26:30.649226] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:58.282 [2024-12-09 10:26:30.649236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:58.282 [2024-12-09 10:26:30.649248] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:58.282 [2024-12-09 10:26:30.649260] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:58.282 [2024-12-09 10:26:30.649267] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.282 [2024-12-09 10:26:30.649276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.649287] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:58.282 [2024-12-09 10:26:30.649295] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:58.282 [2024-12-09 10:26:30.649301] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.282 [2024-12-09 10:26:30.649310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.649322] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:58.282 [2024-12-09 10:26:30.649330] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:58.282 [2024-12-09 10:26:30.649336] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:58.282 [2024-12-09 10:26:30.649344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:58.282 [2024-12-09 10:26:30.657154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.657182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.657200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:58.282 [2024-12-09 10:26:30.657212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:58.282 ===================================================== 00:15:58.282 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.282 ===================================================== 00:15:58.282 Controller Capabilities/Features 00:15:58.282 
================================ 00:15:58.282 Vendor ID: 4e58 00:15:58.282 Subsystem Vendor ID: 4e58 00:15:58.282 Serial Number: SPDK2 00:15:58.282 Model Number: SPDK bdev Controller 00:15:58.282 Firmware Version: 25.01 00:15:58.282 Recommended Arb Burst: 6 00:15:58.282 IEEE OUI Identifier: 8d 6b 50 00:15:58.282 Multi-path I/O 00:15:58.282 May have multiple subsystem ports: Yes 00:15:58.282 May have multiple controllers: Yes 00:15:58.282 Associated with SR-IOV VF: No 00:15:58.282 Max Data Transfer Size: 131072 00:15:58.282 Max Number of Namespaces: 32 00:15:58.283 Max Number of I/O Queues: 127 00:15:58.283 NVMe Specification Version (VS): 1.3 00:15:58.283 NVMe Specification Version (Identify): 1.3 00:15:58.283 Maximum Queue Entries: 256 00:15:58.283 Contiguous Queues Required: Yes 00:15:58.283 Arbitration Mechanisms Supported 00:15:58.283 Weighted Round Robin: Not Supported 00:15:58.283 Vendor Specific: Not Supported 00:15:58.283 Reset Timeout: 15000 ms 00:15:58.283 Doorbell Stride: 4 bytes 00:15:58.283 NVM Subsystem Reset: Not Supported 00:15:58.283 Command Sets Supported 00:15:58.283 NVM Command Set: Supported 00:15:58.283 Boot Partition: Not Supported 00:15:58.283 Memory Page Size Minimum: 4096 bytes 00:15:58.283 Memory Page Size Maximum: 4096 bytes 00:15:58.283 Persistent Memory Region: Not Supported 00:15:58.283 Optional Asynchronous Events Supported 00:15:58.283 Namespace Attribute Notices: Supported 00:15:58.283 Firmware Activation Notices: Not Supported 00:15:58.283 ANA Change Notices: Not Supported 00:15:58.283 PLE Aggregate Log Change Notices: Not Supported 00:15:58.283 LBA Status Info Alert Notices: Not Supported 00:15:58.283 EGE Aggregate Log Change Notices: Not Supported 00:15:58.283 Normal NVM Subsystem Shutdown event: Not Supported 00:15:58.283 Zone Descriptor Change Notices: Not Supported 00:15:58.283 Discovery Log Change Notices: Not Supported 00:15:58.283 Controller Attributes 00:15:58.283 128-bit Host Identifier: Supported 00:15:58.283 
Non-Operational Permissive Mode: Not Supported 00:15:58.283 NVM Sets: Not Supported 00:15:58.283 Read Recovery Levels: Not Supported 00:15:58.283 Endurance Groups: Not Supported 00:15:58.283 Predictable Latency Mode: Not Supported 00:15:58.283 Traffic Based Keep ALive: Not Supported 00:15:58.283 Namespace Granularity: Not Supported 00:15:58.283 SQ Associations: Not Supported 00:15:58.283 UUID List: Not Supported 00:15:58.283 Multi-Domain Subsystem: Not Supported 00:15:58.283 Fixed Capacity Management: Not Supported 00:15:58.283 Variable Capacity Management: Not Supported 00:15:58.283 Delete Endurance Group: Not Supported 00:15:58.283 Delete NVM Set: Not Supported 00:15:58.283 Extended LBA Formats Supported: Not Supported 00:15:58.283 Flexible Data Placement Supported: Not Supported 00:15:58.283 00:15:58.283 Controller Memory Buffer Support 00:15:58.283 ================================ 00:15:58.283 Supported: No 00:15:58.283 00:15:58.283 Persistent Memory Region Support 00:15:58.283 ================================ 00:15:58.283 Supported: No 00:15:58.283 00:15:58.283 Admin Command Set Attributes 00:15:58.283 ============================ 00:15:58.283 Security Send/Receive: Not Supported 00:15:58.283 Format NVM: Not Supported 00:15:58.283 Firmware Activate/Download: Not Supported 00:15:58.283 Namespace Management: Not Supported 00:15:58.283 Device Self-Test: Not Supported 00:15:58.283 Directives: Not Supported 00:15:58.283 NVMe-MI: Not Supported 00:15:58.283 Virtualization Management: Not Supported 00:15:58.283 Doorbell Buffer Config: Not Supported 00:15:58.283 Get LBA Status Capability: Not Supported 00:15:58.283 Command & Feature Lockdown Capability: Not Supported 00:15:58.283 Abort Command Limit: 4 00:15:58.283 Async Event Request Limit: 4 00:15:58.283 Number of Firmware Slots: N/A 00:15:58.283 Firmware Slot 1 Read-Only: N/A 00:15:58.283 Firmware Activation Without Reset: N/A 00:15:58.283 Multiple Update Detection Support: N/A 00:15:58.283 Firmware Update 
Granularity: No Information Provided 00:15:58.283 Per-Namespace SMART Log: No 00:15:58.283 Asymmetric Namespace Access Log Page: Not Supported 00:15:58.283 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:58.283 Command Effects Log Page: Supported 00:15:58.283 Get Log Page Extended Data: Supported 00:15:58.283 Telemetry Log Pages: Not Supported 00:15:58.283 Persistent Event Log Pages: Not Supported 00:15:58.283 Supported Log Pages Log Page: May Support 00:15:58.283 Commands Supported & Effects Log Page: Not Supported 00:15:58.283 Feature Identifiers & Effects Log Page:May Support 00:15:58.283 NVMe-MI Commands & Effects Log Page: May Support 00:15:58.283 Data Area 4 for Telemetry Log: Not Supported 00:15:58.283 Error Log Page Entries Supported: 128 00:15:58.283 Keep Alive: Supported 00:15:58.283 Keep Alive Granularity: 10000 ms 00:15:58.283 00:15:58.283 NVM Command Set Attributes 00:15:58.283 ========================== 00:15:58.283 Submission Queue Entry Size 00:15:58.283 Max: 64 00:15:58.283 Min: 64 00:15:58.283 Completion Queue Entry Size 00:15:58.283 Max: 16 00:15:58.283 Min: 16 00:15:58.283 Number of Namespaces: 32 00:15:58.283 Compare Command: Supported 00:15:58.283 Write Uncorrectable Command: Not Supported 00:15:58.283 Dataset Management Command: Supported 00:15:58.283 Write Zeroes Command: Supported 00:15:58.283 Set Features Save Field: Not Supported 00:15:58.283 Reservations: Not Supported 00:15:58.283 Timestamp: Not Supported 00:15:58.283 Copy: Supported 00:15:58.283 Volatile Write Cache: Present 00:15:58.283 Atomic Write Unit (Normal): 1 00:15:58.283 Atomic Write Unit (PFail): 1 00:15:58.283 Atomic Compare & Write Unit: 1 00:15:58.283 Fused Compare & Write: Supported 00:15:58.283 Scatter-Gather List 00:15:58.283 SGL Command Set: Supported (Dword aligned) 00:15:58.283 SGL Keyed: Not Supported 00:15:58.283 SGL Bit Bucket Descriptor: Not Supported 00:15:58.283 SGL Metadata Pointer: Not Supported 00:15:58.283 Oversized SGL: Not Supported 00:15:58.283 SGL 
Metadata Address: Not Supported 00:15:58.283 SGL Offset: Not Supported 00:15:58.283 Transport SGL Data Block: Not Supported 00:15:58.283 Replay Protected Memory Block: Not Supported 00:15:58.283 00:15:58.283 Firmware Slot Information 00:15:58.283 ========================= 00:15:58.283 Active slot: 1 00:15:58.283 Slot 1 Firmware Revision: 25.01 00:15:58.283 00:15:58.283 00:15:58.283 Commands Supported and Effects 00:15:58.283 ============================== 00:15:58.283 Admin Commands 00:15:58.283 -------------- 00:15:58.283 Get Log Page (02h): Supported 00:15:58.283 Identify (06h): Supported 00:15:58.283 Abort (08h): Supported 00:15:58.283 Set Features (09h): Supported 00:15:58.283 Get Features (0Ah): Supported 00:15:58.283 Asynchronous Event Request (0Ch): Supported 00:15:58.283 Keep Alive (18h): Supported 00:15:58.283 I/O Commands 00:15:58.283 ------------ 00:15:58.283 Flush (00h): Supported LBA-Change 00:15:58.283 Write (01h): Supported LBA-Change 00:15:58.283 Read (02h): Supported 00:15:58.283 Compare (05h): Supported 00:15:58.283 Write Zeroes (08h): Supported LBA-Change 00:15:58.283 Dataset Management (09h): Supported LBA-Change 00:15:58.283 Copy (19h): Supported LBA-Change 00:15:58.283 00:15:58.283 Error Log 00:15:58.283 ========= 00:15:58.283 00:15:58.283 Arbitration 00:15:58.283 =========== 00:15:58.283 Arbitration Burst: 1 00:15:58.283 00:15:58.283 Power Management 00:15:58.283 ================ 00:15:58.283 Number of Power States: 1 00:15:58.283 Current Power State: Power State #0 00:15:58.283 Power State #0: 00:15:58.283 Max Power: 0.00 W 00:15:58.283 Non-Operational State: Operational 00:15:58.283 Entry Latency: Not Reported 00:15:58.283 Exit Latency: Not Reported 00:15:58.283 Relative Read Throughput: 0 00:15:58.283 Relative Read Latency: 0 00:15:58.283 Relative Write Throughput: 0 00:15:58.283 Relative Write Latency: 0 00:15:58.283 Idle Power: Not Reported 00:15:58.283 Active Power: Not Reported 00:15:58.283 Non-Operational Permissive Mode: Not 
Supported 00:15:58.283 00:15:58.283 Health Information 00:15:58.283 ================== 00:15:58.283 Critical Warnings: 00:15:58.283 Available Spare Space: OK 00:15:58.283 Temperature: OK 00:15:58.283 Device Reliability: OK 00:15:58.283 Read Only: No 00:15:58.283 Volatile Memory Backup: OK 00:15:58.283 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:58.283 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:58.283 Available Spare: 0% 00:15:58.283 Available Spare Threshold: 0% [2024-12-09 10:26:30.657329] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:58.283 [2024-12-09 10:26:30.665153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:58.283 [2024-12-09 10:26:30.665204] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:58.283 [2024-12-09 10:26:30.665222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.283 [2024-12-09 10:26:30.665233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.283 [2024-12-09 10:26:30.665243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.283 [2024-12-09 10:26:30.665252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:58.283 [2024-12-09 10:26:30.665341] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:58.283 [2024-12-09 10:26:30.665362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:58.284 
[2024-12-09 10:26:30.666342] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:58.284 [2024-12-09 10:26:30.666414] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:58.284 [2024-12-09 10:26:30.666444] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:58.284 [2024-12-09 10:26:30.667356] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:58.284 [2024-12-09 10:26:30.667386] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:58.284 [2024-12-09 10:26:30.667455] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:58.284 [2024-12-09 10:26:30.668671] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:58.542 Life Percentage Used: 0% 00:15:58.542 Data Units Read: 0 00:15:58.542 Data Units Written: 0 00:15:58.542 Host Read Commands: 0 00:15:58.542 Host Write Commands: 0 00:15:58.542 Controller Busy Time: 0 minutes 00:15:58.542 Power Cycles: 0 00:15:58.542 Power On Hours: 0 hours 00:15:58.542 Unsafe Shutdowns: 0 00:15:58.542 Unrecoverable Media Errors: 0 00:15:58.542 Lifetime Error Log Entries: 0 00:15:58.542 Warning Temperature Time: 0 minutes 00:15:58.542 Critical Temperature Time: 0 minutes 00:15:58.542 00:15:58.542 Number of Queues 00:15:58.542 ================ 00:15:58.542 Number of I/O Submission Queues: 127 00:15:58.542 Number of I/O Completion Queues: 127 00:15:58.542 00:15:58.542 Active Namespaces 00:15:58.542 ================= 00:15:58.542 Namespace ID:1 00:15:58.542 Error Recovery Timeout: Unlimited 
00:15:58.542 Command Set Identifier: NVM (00h) 00:15:58.542 Deallocate: Supported 00:15:58.542 Deallocated/Unwritten Error: Not Supported 00:15:58.542 Deallocated Read Value: Unknown 00:15:58.542 Deallocate in Write Zeroes: Not Supported 00:15:58.542 Deallocated Guard Field: 0xFFFF 00:15:58.542 Flush: Supported 00:15:58.542 Reservation: Supported 00:15:58.542 Namespace Sharing Capabilities: Multiple Controllers 00:15:58.542 Size (in LBAs): 131072 (0GiB) 00:15:58.542 Capacity (in LBAs): 131072 (0GiB) 00:15:58.542 Utilization (in LBAs): 131072 (0GiB) 00:15:58.542 NGUID: DA7CC58D06704251B443A459CE40D530 00:15:58.542 UUID: da7cc58d-0670-4251-b443-a459ce40d530 00:15:58.542 Thin Provisioning: Not Supported 00:15:58.542 Per-NS Atomic Units: Yes 00:15:58.542 Atomic Boundary Size (Normal): 0 00:15:58.542 Atomic Boundary Size (PFail): 0 00:15:58.542 Atomic Boundary Offset: 0 00:15:58.542 Maximum Single Source Range Length: 65535 00:15:58.542 Maximum Copy Length: 65535 00:15:58.542 Maximum Source Range Count: 1 00:15:58.542 NGUID/EUI64 Never Reused: No 00:15:58.542 Namespace Write Protected: No 00:15:58.542 Number of LBA Formats: 1 00:15:58.542 Current LBA Format: LBA Format #00 00:15:58.542 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:58.542 00:15:58.542 10:26:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:58.800 [2024-12-09 10:26:31.006250] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:04.056 Initializing NVMe Controllers 00:16:04.056 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:04.056 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:16:04.056 Initialization complete. Launching workers. 00:16:04.056 ======================================================== 00:16:04.056 Latency(us) 00:16:04.056 Device Information : IOPS MiB/s Average min max 00:16:04.056 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30955.00 120.92 4137.20 1218.05 11285.25 00:16:04.056 ======================================================== 00:16:04.056 Total : 30955.00 120.92 4137.20 1218.05 11285.25 00:16:04.056 00:16:04.056 [2024-12-09 10:26:36.119509] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:04.056 10:26:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:04.056 [2024-12-09 10:26:36.452504] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:09.390 Initializing NVMe Controllers 00:16:09.390 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:09.390 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:09.390 Initialization complete. Launching workers. 
00:16:09.390 ======================================================== 00:16:09.390 Latency(us) 00:16:09.390 Device Information : IOPS MiB/s Average min max 00:16:09.390 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 28329.26 110.66 4517.32 1248.02 10575.88 00:16:09.390 ======================================================== 00:16:09.390 Total : 28329.26 110.66 4517.32 1248.02 10575.88 00:16:09.390 00:16:09.390 [2024-12-09 10:26:41.471563] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.390 10:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:09.390 [2024-12-09 10:26:41.790845] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:14.661 [2024-12-09 10:26:46.918321] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:14.661 Initializing NVMe Controllers 00:16:14.661 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:14.661 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:14.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:14.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:14.661 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:14.661 Initialization complete. Launching workers. 
00:16:14.661 Starting thread on core 2 00:16:14.661 Starting thread on core 3 00:16:14.661 Starting thread on core 1 00:16:14.662 10:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:14.919 [2024-12-09 10:26:47.323333] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.196 [2024-12-09 10:26:50.400500] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.196 Initializing NVMe Controllers 00:16:18.196 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.196 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:18.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:18.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:18.196 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:18.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:18.196 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:18.196 Initialization complete. Launching workers. 
00:16:18.196 Starting thread on core 1 with urgent priority queue 00:16:18.196 Starting thread on core 2 with urgent priority queue 00:16:18.196 Starting thread on core 3 with urgent priority queue 00:16:18.196 Starting thread on core 0 with urgent priority queue 00:16:18.196 SPDK bdev Controller (SPDK2 ) core 0: 5732.33 IO/s 17.44 secs/100000 ios 00:16:18.196 SPDK bdev Controller (SPDK2 ) core 1: 5468.33 IO/s 18.29 secs/100000 ios 00:16:18.196 SPDK bdev Controller (SPDK2 ) core 2: 5090.00 IO/s 19.65 secs/100000 ios 00:16:18.196 SPDK bdev Controller (SPDK2 ) core 3: 5801.33 IO/s 17.24 secs/100000 ios 00:16:18.196 ======================================================== 00:16:18.196 00:16:18.196 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:18.453 [2024-12-09 10:26:50.798657] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:18.453 Initializing NVMe Controllers 00:16:18.453 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.453 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:18.453 Namespace ID: 1 size: 0GB 00:16:18.453 Initialization complete. 00:16:18.453 INFO: using host memory buffer for IO 00:16:18.453 Hello world! 
00:16:18.453 [2024-12-09 10:26:50.807836] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:18.710 10:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:18.966 [2024-12-09 10:26:51.200528] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:19.897 Initializing NVMe Controllers 00:16:19.897 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.897 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:19.897 Initialization complete. Launching workers. 00:16:19.897 submit (in ns) avg, min, max = 6217.0, 3528.9, 4020210.0 00:16:19.897 complete (in ns) avg, min, max = 27829.9, 2070.0, 4019800.0 00:16:19.897 00:16:19.897 Submit histogram 00:16:19.897 ================ 00:16:19.897 Range in us Cumulative Count 00:16:19.897 3.508 - 3.532: 0.0081% ( 1) 00:16:19.897 3.532 - 3.556: 0.0486% ( 5) 00:16:19.897 3.556 - 3.579: 0.5585% ( 63) 00:16:19.897 3.579 - 3.603: 3.7475% ( 394) 00:16:19.897 3.603 - 3.627: 12.9907% ( 1142) 00:16:19.897 3.627 - 3.650: 25.2206% ( 1511) 00:16:19.897 3.650 - 3.674: 34.3828% ( 1132) 00:16:19.897 3.674 - 3.698: 41.7402% ( 909) 00:16:19.897 3.698 - 3.721: 48.7090% ( 861) 00:16:19.897 3.721 - 3.745: 55.4674% ( 835) 00:16:19.897 3.745 - 3.769: 60.8094% ( 660) 00:16:19.897 3.769 - 3.793: 65.2206% ( 545) 00:16:19.897 3.793 - 3.816: 68.8709% ( 451) 00:16:19.897 3.816 - 3.840: 71.3962% ( 312) 00:16:19.897 3.840 - 3.864: 74.8604% ( 428) 00:16:19.897 3.864 - 3.887: 79.1744% ( 533) 00:16:19.897 3.887 - 3.911: 82.7600% ( 443) 00:16:19.897 3.911 - 3.935: 85.6253% ( 354) 00:16:19.897 3.935 - 3.959: 87.5111% ( 233) 00:16:19.897 3.959 - 3.982: 89.3403% ( 226) 00:16:19.897 3.982 - 4.006: 90.9510% ( 199) 
00:16:19.897 4.006 - 4.030: 92.1246% ( 145) 00:16:19.897 4.030 - 4.053: 93.0959% ( 120) 00:16:19.897 4.053 - 4.077: 93.8648% ( 95) 00:16:19.897 4.077 - 4.101: 94.5690% ( 87) 00:16:19.897 4.101 - 4.124: 95.0870% ( 64) 00:16:19.897 4.124 - 4.148: 95.3541% ( 33) 00:16:19.897 4.148 - 4.172: 95.6212% ( 33) 00:16:19.897 4.172 - 4.196: 95.7993% ( 22) 00:16:19.897 4.196 - 4.219: 95.8721% ( 9) 00:16:19.897 4.219 - 4.243: 96.0178% ( 18) 00:16:19.897 4.243 - 4.267: 96.1068% ( 11) 00:16:19.897 4.267 - 4.290: 96.2930% ( 23) 00:16:19.897 4.290 - 4.314: 96.4144% ( 15) 00:16:19.897 4.314 - 4.338: 96.5196% ( 13) 00:16:19.897 4.338 - 4.361: 96.6572% ( 17) 00:16:19.897 4.361 - 4.385: 96.7220% ( 8) 00:16:19.897 4.385 - 4.409: 96.8272% ( 13) 00:16:19.897 4.409 - 4.433: 96.8677% ( 5) 00:16:19.897 4.433 - 4.456: 96.9081% ( 5) 00:16:19.897 4.456 - 4.480: 96.9648% ( 7) 00:16:19.897 4.480 - 4.504: 97.0053% ( 5) 00:16:19.897 4.504 - 4.527: 97.0538% ( 6) 00:16:19.897 4.527 - 4.551: 97.0700% ( 2) 00:16:19.897 4.575 - 4.599: 97.0943% ( 3) 00:16:19.897 4.670 - 4.693: 97.1105% ( 2) 00:16:19.897 4.693 - 4.717: 97.1267% ( 2) 00:16:19.897 4.717 - 4.741: 97.1429% ( 2) 00:16:19.897 4.741 - 4.764: 97.1914% ( 6) 00:16:19.897 4.764 - 4.788: 97.1995% ( 1) 00:16:19.897 4.788 - 4.812: 97.2076% ( 1) 00:16:19.897 4.812 - 4.836: 97.2157% ( 1) 00:16:19.897 4.836 - 4.859: 97.2319% ( 2) 00:16:19.897 4.859 - 4.883: 97.2400% ( 1) 00:16:19.897 4.883 - 4.907: 97.2562% ( 2) 00:16:19.897 4.907 - 4.930: 97.3209% ( 8) 00:16:19.897 4.930 - 4.954: 97.3533% ( 4) 00:16:19.897 4.954 - 4.978: 97.4261% ( 9) 00:16:19.897 4.978 - 5.001: 97.4909% ( 8) 00:16:19.897 5.001 - 5.025: 97.5799% ( 11) 00:16:19.897 5.025 - 5.049: 97.6366% ( 7) 00:16:19.897 5.049 - 5.073: 97.6771% ( 5) 00:16:19.897 5.073 - 5.096: 97.7499% ( 9) 00:16:19.897 5.096 - 5.120: 97.7580% ( 1) 00:16:19.897 5.120 - 5.144: 97.7904% ( 4) 00:16:19.897 5.144 - 5.167: 97.8146% ( 3) 00:16:19.898 5.167 - 5.191: 97.8308% ( 2) 00:16:19.898 5.191 - 5.215: 97.8794% ( 6) 
00:16:19.898 5.215 - 5.239: 97.9118% ( 4) 00:16:19.898 5.239 - 5.262: 97.9603% ( 6) 00:16:19.898 5.262 - 5.286: 97.9765% ( 2) 00:16:19.898 5.286 - 5.310: 98.0089% ( 4) 00:16:19.898 5.310 - 5.333: 98.0251% ( 2) 00:16:19.898 5.333 - 5.357: 98.0575% ( 4) 00:16:19.898 5.357 - 5.381: 98.0656% ( 1) 00:16:19.898 5.381 - 5.404: 98.0898% ( 3) 00:16:19.898 5.404 - 5.428: 98.1060% ( 2) 00:16:19.898 5.428 - 5.452: 98.1222% ( 2) 00:16:19.898 5.499 - 5.523: 98.1303% ( 1) 00:16:19.898 5.523 - 5.547: 98.1384% ( 1) 00:16:19.898 5.547 - 5.570: 98.1465% ( 1) 00:16:19.898 5.594 - 5.618: 98.1546% ( 1) 00:16:19.898 5.618 - 5.641: 98.1627% ( 1) 00:16:19.898 5.641 - 5.665: 98.1789% ( 2) 00:16:19.898 5.713 - 5.736: 98.1870% ( 1) 00:16:19.898 5.760 - 5.784: 98.1951% ( 1) 00:16:19.898 5.831 - 5.855: 98.2032% ( 1) 00:16:19.898 5.855 - 5.879: 98.2113% ( 1) 00:16:19.898 5.902 - 5.926: 98.2193% ( 1) 00:16:19.898 5.950 - 5.973: 98.2274% ( 1) 00:16:19.898 6.021 - 6.044: 98.2355% ( 1) 00:16:19.898 6.044 - 6.068: 98.2517% ( 2) 00:16:19.898 6.068 - 6.116: 98.2598% ( 1) 00:16:19.898 6.116 - 6.163: 98.2841% ( 3) 00:16:19.898 6.163 - 6.210: 98.3084% ( 3) 00:16:19.898 6.210 - 6.258: 98.3246% ( 2) 00:16:19.898 6.258 - 6.305: 98.3327% ( 1) 00:16:19.898 6.305 - 6.353: 98.3408% ( 1) 00:16:19.898 6.353 - 6.400: 98.3731% ( 4) 00:16:19.898 6.400 - 6.447: 98.3893% ( 2) 00:16:19.898 6.495 - 6.542: 98.4136% ( 3) 00:16:19.898 6.590 - 6.637: 98.4379% ( 3) 00:16:19.898 6.637 - 6.684: 98.4460% ( 1) 00:16:19.898 6.684 - 6.732: 98.4622% ( 2) 00:16:19.898 6.921 - 6.969: 98.4703% ( 1) 00:16:19.898 6.969 - 7.016: 98.4783% ( 1) 00:16:19.898 7.111 - 7.159: 98.4864% ( 1) 00:16:19.898 7.348 - 7.396: 98.4945% ( 1) 00:16:19.898 7.396 - 7.443: 98.5026% ( 1) 00:16:19.898 7.633 - 7.680: 98.5107% ( 1) 00:16:19.898 7.680 - 7.727: 98.5188% ( 1) 00:16:19.898 7.727 - 7.775: 98.5269% ( 1) 00:16:19.898 7.775 - 7.822: 98.5350% ( 1) 00:16:19.898 7.870 - 7.917: 98.5431% ( 1) 00:16:19.898 8.012 - 8.059: 98.5512% ( 1) 00:16:19.898 8.107 - 
8.154: 98.5593% ( 1) 00:16:19.898 8.154 - 8.201: 98.5674% ( 1) 00:16:19.898 8.249 - 8.296: 98.5755% ( 1) 00:16:19.898 8.391 - 8.439: 98.5836% ( 1) 00:16:19.898 8.439 - 8.486: 98.5917% ( 1) 00:16:19.898 8.533 - 8.581: 98.5998% ( 1) 00:16:19.898 8.581 - 8.628: 98.6079% ( 1) 00:16:19.898 8.628 - 8.676: 98.6159% ( 1) 00:16:19.898 8.770 - 8.818: 98.6240% ( 1) 00:16:19.898 8.865 - 8.913: 98.6321% ( 1) 00:16:19.898 9.244 - 9.292: 98.6483% ( 2) 00:16:19.898 9.292 - 9.339: 98.6645% ( 2) 00:16:19.898 9.339 - 9.387: 98.6807% ( 2) 00:16:19.898 9.387 - 9.434: 98.6888% ( 1) 00:16:19.898 9.529 - 9.576: 98.6969% ( 1) 00:16:19.898 9.576 - 9.624: 98.7212% ( 3) 00:16:19.898 9.671 - 9.719: 98.7293% ( 1) 00:16:19.898 9.766 - 9.813: 98.7374% ( 1) 00:16:19.898 9.861 - 9.908: 98.7535% ( 2) 00:16:19.898 9.956 - 10.003: 98.7616% ( 1) 00:16:19.898 10.050 - 10.098: 98.7778% ( 2) 00:16:19.898 10.145 - 10.193: 98.8021% ( 3) 00:16:19.898 10.193 - 10.240: 98.8264% ( 3) 00:16:19.898 10.287 - 10.335: 98.8345% ( 1) 00:16:19.898 10.524 - 10.572: 98.8426% ( 1) 00:16:19.898 10.761 - 10.809: 98.8507% ( 1) 00:16:19.898 10.856 - 10.904: 98.8588% ( 1) 00:16:19.898 10.999 - 11.046: 98.8749% ( 2) 00:16:19.898 11.141 - 11.188: 98.8830% ( 1) 00:16:19.898 11.188 - 11.236: 98.8911% ( 1) 00:16:19.898 11.283 - 11.330: 98.8992% ( 1) 00:16:19.898 11.425 - 11.473: 98.9073% ( 1) 00:16:19.898 11.615 - 11.662: 98.9154% ( 1) 00:16:19.898 11.662 - 11.710: 98.9235% ( 1) 00:16:19.898 11.710 - 11.757: 98.9316% ( 1) 00:16:19.898 11.852 - 11.899: 98.9397% ( 1) 00:16:19.898 12.041 - 12.089: 98.9478% ( 1) 00:16:19.898 12.231 - 12.326: 98.9721% ( 3) 00:16:19.898 12.326 - 12.421: 98.9802% ( 1) 00:16:19.898 12.421 - 12.516: 98.9883% ( 1) 00:16:19.898 12.705 - 12.800: 98.9964% ( 1) 00:16:19.898 12.800 - 12.895: 99.0045% ( 1) 00:16:19.898 12.895 - 12.990: 99.0125% ( 1) 00:16:19.898 12.990 - 13.084: 99.0206% ( 1) 00:16:19.898 13.084 - 13.179: 99.0368% ( 2) 00:16:19.898 13.274 - 13.369: 99.0449% ( 1) 00:16:19.898 13.559 - 13.653: 
99.0530% ( 1) 00:16:19.898 13.653 - 13.748: 99.0773% ( 3) 00:16:19.898 13.843 - 13.938: 99.0854% ( 1) 00:16:19.898 14.317 - 14.412: 99.0935% ( 1) 00:16:19.898 14.886 - 14.981: 99.1016% ( 1) 00:16:19.898 14.981 - 15.076: 99.1097% ( 1) 00:16:19.898 15.076 - 15.170: 99.1178% ( 1) 00:16:19.898 15.170 - 15.265: 99.1340% ( 2) 00:16:19.898 16.308 - 16.403: 99.1420% ( 1) 00:16:19.898 16.877 - 16.972: 99.1501% ( 1) 00:16:19.898 16.972 - 17.067: 99.1582% ( 1) 00:16:19.898 17.256 - 17.351: 99.1663% ( 1) 00:16:19.898 17.351 - 17.446: 99.1906% ( 3) 00:16:19.898 17.446 - 17.541: 99.1987% ( 1) 00:16:19.898 17.541 - 17.636: 99.2392% ( 5) 00:16:19.898 17.636 - 17.730: 99.2877% ( 6) 00:16:19.898 17.730 - 17.825: 99.3363% ( 6) 00:16:19.898 17.825 - 17.920: 99.4011% ( 8) 00:16:19.898 17.920 - 18.015: 99.4334% ( 4) 00:16:19.898 18.015 - 18.110: 99.4901% ( 7) 00:16:19.898 18.110 - 18.204: 99.5225% ( 4) 00:16:19.898 18.204 - 18.299: 99.5629% ( 5) 00:16:19.898 18.299 - 18.394: 99.6358% ( 9) 00:16:19.898 18.394 - 18.489: 99.7167% ( 10) 00:16:19.898 18.489 - 18.584: 99.7734% ( 7) 00:16:19.898 18.584 - 18.679: 99.8138% ( 5) 00:16:19.898 18.679 - 18.773: 99.8300% ( 2) 00:16:19.898 18.773 - 18.868: 99.8462% ( 2) 00:16:19.899 18.868 - 18.963: 99.8624% ( 2) 00:16:19.899 18.963 - 19.058: 99.8948% ( 4) 00:16:19.899 20.764 - 20.859: 99.9029% ( 1) 00:16:19.899 21.997 - 22.092: 99.9110% ( 1) 00:16:19.899 23.324 - 23.419: 99.9191% ( 1) 00:16:19.899 24.178 - 24.273: 99.9272% ( 1) 00:16:19.899 25.221 - 25.410: 99.9352% ( 1) 00:16:19.899 25.979 - 26.169: 99.9433% ( 1) 00:16:19.899 3980.705 - 4004.978: 99.9676% ( 3) 00:16:19.899 4004.978 - 4029.250: 100.0000% ( 4) 00:16:19.899 00:16:19.899 Complete histogram 00:16:19.899 ================== 00:16:19.899 Range in us Cumulative Count 00:16:19.899 2.062 - 2.074: 0.5504% ( 68) 00:16:19.899 2.074 - 2.086: 26.0785% ( 3154) 00:16:19.899 2.086 - 2.098: 38.0251% ( 1476) 00:16:19.899 2.098 - 2.110: 41.5864% ( 440) 00:16:19.899 2.110 - 2.121: 57.6690% ( 1987) 
00:16:19.899 2.121 - 2.133: 61.1331% ( 428) 00:16:19.899 2.133 - 2.145: 64.5569% ( 423) 00:16:19.899 2.145 - 2.157: 75.7264% ( 1380) 00:16:19.899 2.157 - 2.169: 78.1870% ( 304) 00:16:19.899 2.169 - 2.181: 81.2788% ( 382) 00:16:19.899 2.181 - 2.193: 85.7871% ( 557) 00:16:19.899 2.193 - 2.204: 86.8393% ( 130) 00:16:19.899 2.204 - 2.216: 87.8430% ( 124) 00:16:19.899 2.216 - 2.228: 89.9150% ( 256) 00:16:19.899 2.228 - 2.240: 92.2056% ( 283) 00:16:19.899 2.240 - 2.252: 93.4601% ( 155) 00:16:19.899 2.252 - 2.264: 94.2857% ( 102) 00:16:19.899 2.264 - 2.276: 94.4800% ( 24) 00:16:19.899 2.276 - 2.287: 94.7390% ( 32) 00:16:19.899 2.287 - 2.299: 95.1032% ( 45) 00:16:19.899 2.299 - 2.311: 95.3136% ( 26) 00:16:19.899 2.311 - 2.323: 95.4674% ( 19) 00:16:19.899 2.323 - 2.335: 95.5079% ( 5) 00:16:19.899 2.335 - 2.347: 95.5160% ( 1) 00:16:19.899 2.347 - 2.359: 95.5403% ( 3) 00:16:19.899 2.359 - 2.370: 95.6050% ( 8) 00:16:19.899 2.370 - 2.382: 95.7021% ( 12) 00:16:19.899 2.382 - 2.394: 95.8316% ( 16) 00:16:19.899 2.394 - 2.406: 96.0016% ( 21) 00:16:19.899 2.406 - 2.418: 96.1149% ( 14) 00:16:19.899 2.418 - 2.430: 96.2606% ( 18) 00:16:19.899 2.430 - 2.441: 96.4306% ( 21) 00:16:19.899 2.441 - 2.453: 96.6006% ( 21) 00:16:19.899 2.453 - 2.465: 96.7382% ( 17) 00:16:19.899 2.465 - 2.477: 96.8677% ( 16) 00:16:19.899 2.477 - 2.489: 96.9243% ( 7) 00:16:19.899 2.489 - 2.501: 97.0457% ( 15) 00:16:19.899 2.501 - 2.513: 97.1590% ( 14) 00:16:19.899 2.513 - 2.524: 97.2562% ( 12) 00:16:19.899 2.524 - 2.536: 97.3371% ( 10) 00:16:19.899 2.536 - 2.548: 97.4100% ( 9) 00:16:19.899 2.548 - 2.560: 97.4342% ( 3) 00:16:19.899 2.560 - 2.572: 97.4585% ( 3) 00:16:19.899 2.572 - 2.584: 97.4990% ( 5) 00:16:19.899 2.596 - 2.607: 97.5314% ( 4) 00:16:19.899 2.607 - 2.619: 97.5718% ( 5) 00:16:19.899 2.619 - 2.631: 97.6204% ( 6) 00:16:19.899 2.631 - 2.643: 97.6690% ( 6) 00:16:19.899 2.643 - 2.655: 97.7175% ( 6) 00:16:19.899 2.655 - 2.667: 97.7823% ( 8) 00:16:19.899 2.667 - 2.679: 97.8308% ( 6) 00:16:19.899 2.679 - 
2.690: 97.8794% ( 6) 00:16:19.899 2.690 - 2.702: 97.9037% ( 3) 00:16:19.899 2.702 - 2.714: 97.9522% ( 6) 00:16:19.899 2.714 - 2.726: 97.9603% ( 1) 00:16:19.899 2.726 - 2.738: 97.9927% ( 4) 00:16:19.899 2.738 - 2.750: 98.0089% ( 2) 00:16:19.899 2.761 - 2.773: 98.0251% ( 2) 00:16:19.899 2.785 - 2.797: 98.0413% ( 2) 00:16:19.899 2.797 - 2.809: 98.0575% ( 2) 00:16:19.899 2.809 - 2.821: 98.0656% ( 1) 00:16:19.899 2.821 - 2.833: 98.0737% ( 1) 00:16:19.899 2.844 - 2.856: 98.0898% ( 2) 00:16:19.899 2.856 - 2.868: 98.0979% ( 1) 00:16:19.899 2.868 - 2.880: 98.1141% ( 2) 00:16:19.899 2.880 - 2.892: 98.1222% ( 1) 00:16:19.899 2.892 - 2.904: 98.1384% ( 2) 00:16:19.899 2.904 - 2.916: 98.1546% ( 2) 00:16:19.899 2.927 - 2.939: 98.1627% ( 1) 00:16:19.899 2.939 - 2.951: 98.1708% ( 1) 00:16:19.899 2.963 - 2.975: 98.1789% ( 1) 00:16:19.899 2.987 - 2.999: 98.1870% ( 1) 00:16:19.899 2.999 - 3.010: 98.1951% ( 1) 00:16:19.899 3.022 - 3.034: 98.2032% ( 1) 00:16:19.899 3.034 - 3.058: 98.2193% ( 2) 00:16:19.899 3.081 - 3.105: 98.2355% ( 2) 00:16:19.899 3.153 - 3.176: 98.2436% ( 1) 00:16:19.899 3.176 - 3.200: 98.2598% ( 2) 00:16:19.899 3.200 - 3.224: 98.2679% ( 1) 00:16:19.899 3.247 - 3.271: 98.3003% ( 4) 00:16:19.899 3.295 - 3.319: 98.3084% ( 1) 00:16:19.899 3.342 - 3.366: 98.3165% ( 1) 00:16:19.899 3.366 - 3.390: 98.3246% ( 1) 00:16:19.899 3.413 - 3.437: 98.3327% ( 1) 00:16:19.899 3.461 - 3.484: 98.3408% ( 1) 00:16:19.899 3.508 - 3.532: 98.3488% ( 1) 00:16:19.899 3.532 - 3.556: 98.3569% ( 1) 00:16:19.899 3.556 - 3.579: 98.3650% ( 1) 00:16:19.899 3.627 - 3.650: 98.3731% ( 1) 00:16:19.899 3.650 - 3.674: 98.3893% ( 2) 00:16:19.899 3.698 - 3.721: 98.4055% ( 2) 00:16:19.899 3.721 - 3.745: 98.4136% ( 1) 00:16:19.899 3.745 - 3.769: 98.4460% ( 4) 00:16:19.899 3.769 - 3.793: 98.4703% ( 3) 00:16:19.899 3.793 - 3.816: 98.4783% ( 1) 00:16:19.899 3.816 - 3.840: 98.4945% ( 2) 00:16:19.899 3.864 - 3.887: 98.5026% ( 1) 00:16:19.899 3.887 - 3.911: 98.5188% ( 2) 00:16:19.899 3.911 - 3.935: 98.5269% ( 1) 
00:16:19.899 3.935 - 3.959: 98.5350% ( 1) 00:16:19.899 3.959 - 3.982: 98.5593% ( 3) 00:16:19.899 4.030 - 4.053: 98.5674% ( 1) 00:16:19.899 4.101 - 4.124: 98.5755% ( 1) 00:16:19.899 4.148 - 4.172: 98.5917% ( 2) 00:16:19.899 4.196 - 4.219: 98.5998% ( 1) 00:16:19.899 4.219 - 4.243: 98.6079% ( 1) 00:16:19.899 4.243 - 4.267: 98.6159% ( 1) 00:16:19.899 4.267 - 4.290: 98.6240% ( 1) 00:16:19.899 4.290 - 4.314: 98.6321% ( 1) 00:16:19.899 4.361 - 4.385: 98.6402% ( 1) 00:16:19.899 4.385 - 4.409: 98.6483% ( 1) 00:16:19.899 4.433 - 4.456: 98.6564% ( 1) 00:16:19.899 4.693 - 4.717: 98.6645% ( 1) 00:16:19.899 5.167 - 5.191: 98.6726% ( 1) 00:16:19.899 5.831 - 5.855: 98.6807% ( 1) 00:16:19.899 6.921 - 6.969: 98.6888% ( 1) 00:16:19.899 7.206 - 7.253: 98.6969% ( 1) 00:16:19.899 7.633 - 7.680: 98.7050% ( 1) 00:16:19.899 8.012 - 8.059: 98.7131% ( 1) 00:16:19.899 8.107 - 8.154: 98.7212% ( 1) 00:16:19.899 8.249 - 8.296: 98.7293% ( 1) 00:16:19.899 9.007 - 9.055: 98.7374% ( 1) 00:16:19.899 9.055 - 9.102: 98.7454% ( 1) 00:16:19.899 9.244 - 9.292: 98.7535% ( 1) 00:16:19.899 9.387 - 9.434: 98.7697% ( 2) 00:16:19.899 9.529 - 9.576: 98.7778% ( 1) 00:16:19.899 9.719 - 9.766: 98.7859% ( 1) 00:16:19.899 9.861 - 9.908: 98.7940% ( 1) 00:16:19.899 10.477 - 10.524: 98.8021% ( 1) 00:16:19.899 12.041 - 12.089: 98.8102% ( 1) 00:16:19.899 15.076 - 15.170: 98.8183% ( 1) 00:16:19.899 15.550 - 15.644: 98.8264% ( 1) 00:16:19.899 15.644 - 15.739: 98.8345% ( 1) 00:16:19.899 15.834 - 15.929: 98.8669% ( 4) 00:16:19.899 15.929 - 16.024: 98.8992% ( 4) 00:16:19.899 16.024 - 16.119: 98.9640% ( 8) 00:16:19.899 16.119 - 16.213: 98.9721% ( 1) 00:16:19.899 16.213 - 16.308: 99.0206% ( 6) 00:16:19.899 16.308 - 16.403: 99.0611% ( 5) 00:16:19.899 16.403 - 16.498: 99.1016% ( 5) [2024-12-09 10:26:52.299919] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.156 00:16:20.156 16.498 - 16.593: 99.1178% ( 2) 00:16:20.156 16.593 - 16.687: 99.1340% ( 2) 00:16:20.156 16.687 - 
16.782: 99.1744% ( 5) 00:16:20.156 16.782 - 16.877: 99.2230% ( 6) 00:16:20.156 16.877 - 16.972: 99.2311% ( 1) 00:16:20.156 17.067 - 17.161: 99.2473% ( 2) 00:16:20.156 17.161 - 17.256: 99.2554% ( 1) 00:16:20.156 17.256 - 17.351: 99.2958% ( 5) 00:16:20.156 17.541 - 17.636: 99.3039% ( 1) 00:16:20.156 17.920 - 18.015: 99.3120% ( 1) 00:16:20.156 18.110 - 18.204: 99.3282% ( 2) 00:16:20.156 18.204 - 18.299: 99.3363% ( 1) 00:16:20.156 18.299 - 18.394: 99.3444% ( 1) 00:16:20.156 18.584 - 18.679: 99.3525% ( 1) 00:16:20.156 18.773 - 18.868: 99.3606% ( 1) 00:16:20.156 3980.705 - 4004.978: 99.8219% ( 57) 00:16:20.156 4004.978 - 4029.250: 100.0000% ( 22) 00:16:20.156 00:16:20.157 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:20.157 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:20.157 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:20.157 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:20.157 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:20.414 [ 00:16:20.414 { 00:16:20.414 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:20.414 "subtype": "Discovery", 00:16:20.414 "listen_addresses": [], 00:16:20.414 "allow_any_host": true, 00:16:20.414 "hosts": [] 00:16:20.414 }, 00:16:20.414 { 00:16:20.414 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:20.414 "subtype": "NVMe", 00:16:20.414 "listen_addresses": [ 00:16:20.414 { 00:16:20.414 "trtype": "VFIOUSER", 00:16:20.414 "adrfam": "IPv4", 00:16:20.414 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:20.414 "trsvcid": "0" 
00:16:20.414 } 00:16:20.414 ], 00:16:20.414 "allow_any_host": true, 00:16:20.414 "hosts": [], 00:16:20.414 "serial_number": "SPDK1", 00:16:20.414 "model_number": "SPDK bdev Controller", 00:16:20.414 "max_namespaces": 32, 00:16:20.414 "min_cntlid": 1, 00:16:20.414 "max_cntlid": 65519, 00:16:20.414 "namespaces": [ 00:16:20.414 { 00:16:20.414 "nsid": 1, 00:16:20.414 "bdev_name": "Malloc1", 00:16:20.414 "name": "Malloc1", 00:16:20.414 "nguid": "A78B944BE9C140B8A2BA9C0E1AA1D2B3", 00:16:20.414 "uuid": "a78b944b-e9c1-40b8-a2ba-9c0e1aa1d2b3" 00:16:20.414 }, 00:16:20.414 { 00:16:20.414 "nsid": 2, 00:16:20.414 "bdev_name": "Malloc3", 00:16:20.414 "name": "Malloc3", 00:16:20.414 "nguid": "2DF019FBDA10421CAFED9CF2FCE022AF", 00:16:20.414 "uuid": "2df019fb-da10-421c-afed-9cf2fce022af" 00:16:20.414 } 00:16:20.414 ] 00:16:20.414 }, 00:16:20.414 { 00:16:20.414 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:20.414 "subtype": "NVMe", 00:16:20.414 "listen_addresses": [ 00:16:20.414 { 00:16:20.414 "trtype": "VFIOUSER", 00:16:20.414 "adrfam": "IPv4", 00:16:20.414 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:20.414 "trsvcid": "0" 00:16:20.414 } 00:16:20.414 ], 00:16:20.414 "allow_any_host": true, 00:16:20.414 "hosts": [], 00:16:20.414 "serial_number": "SPDK2", 00:16:20.414 "model_number": "SPDK bdev Controller", 00:16:20.414 "max_namespaces": 32, 00:16:20.414 "min_cntlid": 1, 00:16:20.414 "max_cntlid": 65519, 00:16:20.414 "namespaces": [ 00:16:20.414 { 00:16:20.414 "nsid": 1, 00:16:20.414 "bdev_name": "Malloc2", 00:16:20.414 "name": "Malloc2", 00:16:20.414 "nguid": "DA7CC58D06704251B443A459CE40D530", 00:16:20.414 "uuid": "da7cc58d-0670-4251-b443-a459ce40d530" 00:16:20.414 } 00:16:20.414 ] 00:16:20.414 } 00:16:20.414 ] 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2522677 00:16:20.414 
10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:20.414 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:20.414 [2024-12-09 10:26:52.843621] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:20.672 Malloc4 00:16:20.672 10:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:20.929 [2024-12-09 10:26:53.227617] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:20.929 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:20.929 Asynchronous 
Event Request test 00:16:20.929 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.929 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:20.929 Registering asynchronous event callbacks... 00:16:20.929 Starting namespace attribute notice tests for all controllers... 00:16:20.929 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:20.929 aer_cb - Changed Namespace 00:16:20.929 Cleaning up... 00:16:21.188 [ 00:16:21.188 { 00:16:21.188 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:21.188 "subtype": "Discovery", 00:16:21.188 "listen_addresses": [], 00:16:21.188 "allow_any_host": true, 00:16:21.188 "hosts": [] 00:16:21.188 }, 00:16:21.188 { 00:16:21.188 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:21.188 "subtype": "NVMe", 00:16:21.188 "listen_addresses": [ 00:16:21.188 { 00:16:21.188 "trtype": "VFIOUSER", 00:16:21.188 "adrfam": "IPv4", 00:16:21.188 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:21.188 "trsvcid": "0" 00:16:21.188 } 00:16:21.188 ], 00:16:21.188 "allow_any_host": true, 00:16:21.188 "hosts": [], 00:16:21.188 "serial_number": "SPDK1", 00:16:21.188 "model_number": "SPDK bdev Controller", 00:16:21.188 "max_namespaces": 32, 00:16:21.188 "min_cntlid": 1, 00:16:21.188 "max_cntlid": 65519, 00:16:21.188 "namespaces": [ 00:16:21.188 { 00:16:21.188 "nsid": 1, 00:16:21.188 "bdev_name": "Malloc1", 00:16:21.188 "name": "Malloc1", 00:16:21.188 "nguid": "A78B944BE9C140B8A2BA9C0E1AA1D2B3", 00:16:21.188 "uuid": "a78b944b-e9c1-40b8-a2ba-9c0e1aa1d2b3" 00:16:21.188 }, 00:16:21.188 { 00:16:21.188 "nsid": 2, 00:16:21.188 "bdev_name": "Malloc3", 00:16:21.188 "name": "Malloc3", 00:16:21.188 "nguid": "2DF019FBDA10421CAFED9CF2FCE022AF", 00:16:21.188 "uuid": "2df019fb-da10-421c-afed-9cf2fce022af" 00:16:21.188 } 00:16:21.188 ] 00:16:21.188 }, 00:16:21.188 { 00:16:21.188 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:21.188 "subtype": "NVMe", 00:16:21.188 "listen_addresses": [ 00:16:21.188 { 
00:16:21.188 "trtype": "VFIOUSER", 00:16:21.188 "adrfam": "IPv4", 00:16:21.188 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:21.188 "trsvcid": "0" 00:16:21.188 } 00:16:21.188 ], 00:16:21.188 "allow_any_host": true, 00:16:21.188 "hosts": [], 00:16:21.188 "serial_number": "SPDK2", 00:16:21.188 "model_number": "SPDK bdev Controller", 00:16:21.188 "max_namespaces": 32, 00:16:21.188 "min_cntlid": 1, 00:16:21.188 "max_cntlid": 65519, 00:16:21.188 "namespaces": [ 00:16:21.188 { 00:16:21.188 "nsid": 1, 00:16:21.188 "bdev_name": "Malloc2", 00:16:21.188 "name": "Malloc2", 00:16:21.188 "nguid": "DA7CC58D06704251B443A459CE40D530", 00:16:21.188 "uuid": "da7cc58d-0670-4251-b443-a459ce40d530" 00:16:21.188 }, 00:16:21.188 { 00:16:21.188 "nsid": 2, 00:16:21.188 "bdev_name": "Malloc4", 00:16:21.188 "name": "Malloc4", 00:16:21.188 "nguid": "05C665DE097848F6BA1CC68BAB2165A7", 00:16:21.188 "uuid": "05c665de-0978-48f6-ba1c-c68bab2165a7" 00:16:21.188 } 00:16:21.188 ] 00:16:21.188 } 00:16:21.188 ] 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2522677 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2516866 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2516866 ']' 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2516866 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2516866 00:16:21.188 10:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2516866' 00:16:21.188 killing process with pid 2516866 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2516866 00:16:21.188 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2516866 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2522851 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2522851' 00:16:21.756 Process pid: 2522851 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:21.756 10:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2522851 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2522851 ']' 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:21.756 10:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:21.756 [2024-12-09 10:26:53.959532] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:21.756 [2024-12-09 10:26:53.960623] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:16:21.756 [2024-12-09 10:26:53.960686] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:21.756 [2024-12-09 10:26:54.023842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:21.756 [2024-12-09 10:26:54.078025] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:21.756 [2024-12-09 10:26:54.078080] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:21.756 [2024-12-09 10:26:54.078108] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:21.756 [2024-12-09 10:26:54.078120] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:21.756 [2024-12-09 10:26:54.078130] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:21.756 [2024-12-09 10:26:54.079577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.756 [2024-12-09 10:26:54.079640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.756 [2024-12-09 10:26:54.079752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:21.756 [2024-12-09 10:26:54.079755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.756 [2024-12-09 10:26:54.164578] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:21.756 [2024-12-09 10:26:54.164779] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:21.756 [2024-12-09 10:26:54.165043] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:21.756 [2024-12-09 10:26:54.165752] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:21.756 [2024-12-09 10:26:54.165956] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
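The trace that follows (nvmf_create_transport, then the `seq 1 $NUM_DEVICES` loop of mkdir / bdev_malloc_create / nvmf_create_subsystem / nvmf_subsystem_add_ns / nvmf_subsystem_add_listener) is the setup_nvmf_vfio_user pattern. A minimal standalone sketch of that flow, with `rpc.py` stubbed by an echo since the real script needs a live nvmf_tgt — the stub, the `/tmp` root, and the exact argument order are assumptions; the NQNs and listener paths mirror the log:

```shell
#!/usr/bin/env bash
# Stand-in for scripts/rpc.py (assumption: real runs need a live target).
rpc() { echo "rpc.py $*"; }

NUM_DEVICES=2
root=/tmp/vfio-user-demo

rpc nvmf_create_transport -t VFIOUSER
for i in $(seq 1 "$NUM_DEVICES"); do
    # One vfio-user socket directory, malloc bdev, and subsystem per device.
    mkdir -p "$root/domain/vfio-user$i/$i"
    rpc bdev_malloc_create 64 512 -b "Malloc$i"
    rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$root/domain/vfio-user$i/$i" -s 0
done
rm -rf "$root"
```

The interrupt-mode variant traced here just adds `--interrupt-mode` to the nvmf_tgt invocation and `-M -I` to the transport-create call.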
00:16:21.757 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.757 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:21.757 10:26:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:23.134 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:23.134 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:23.134 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:23.134 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:23.134 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:23.134 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:23.703 Malloc1 00:16:23.703 10:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:23.962 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:24.220 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:24.477 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:24.477 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:24.477 10:26:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:25.040 Malloc2 00:16:25.040 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:25.298 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:25.555 10:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2522851 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2522851 ']' 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2522851 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.812 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522851 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522851' 00:16:25.812 killing process with pid 2522851 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2522851 00:16:25.812 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2522851 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:26.070 00:16:26.070 real 0m54.694s 00:16:26.070 user 3m30.894s 00:16:26.070 sys 0m3.994s 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:26.070 ************************************ 00:16:26.070 END TEST nvmf_vfio_user 00:16:26.070 ************************************ 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:26.070 ************************************ 00:16:26.070 START TEST nvmf_vfio_user_nvme_compliance 00:16:26.070 ************************************ 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:26.070 * Looking for test storage... 00:16:26.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:16:26.070 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:26.327 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:26.327 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:26.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.327 --rc genhtml_branch_coverage=1 00:16:26.327 --rc genhtml_function_coverage=1 00:16:26.327 --rc genhtml_legend=1 00:16:26.327 --rc geninfo_all_blocks=1 00:16:26.327 --rc geninfo_unexecuted_blocks=1 00:16:26.327 00:16:26.327 ' 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:26.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.327 --rc genhtml_branch_coverage=1 00:16:26.327 --rc genhtml_function_coverage=1 00:16:26.327 --rc genhtml_legend=1 00:16:26.327 --rc geninfo_all_blocks=1 00:16:26.327 --rc geninfo_unexecuted_blocks=1 00:16:26.327 00:16:26.327 ' 00:16:26.327 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:26.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.328 --rc genhtml_branch_coverage=1 00:16:26.328 --rc genhtml_function_coverage=1 00:16:26.328 --rc 
genhtml_legend=1 00:16:26.328 --rc geninfo_all_blocks=1 00:16:26.328 --rc geninfo_unexecuted_blocks=1 00:16:26.328 00:16:26.328 ' 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:26.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:26.328 --rc genhtml_branch_coverage=1 00:16:26.328 --rc genhtml_function_coverage=1 00:16:26.328 --rc genhtml_legend=1 00:16:26.328 --rc geninfo_all_blocks=1 00:16:26.328 --rc geninfo_unexecuted_blocks=1 00:16:26.328 00:16:26.328 ' 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.328 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:26.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:26.328 10:26:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2523464 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2523464' 00:16:26.328 Process pid: 2523464 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2523464 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2523464 ']' 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.328 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:26.328 [2024-12-09 10:26:58.648090] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:16:26.328 [2024-12-09 10:26:58.648218] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.328 [2024-12-09 10:26:58.713355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:26.585 [2024-12-09 10:26:58.773201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.585 [2024-12-09 10:26:58.773269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.585 [2024-12-09 10:26:58.773298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.585 [2024-12-09 10:26:58.773310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.585 [2024-12-09 10:26:58.773320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:26.585 [2024-12-09 10:26:58.774835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.585 [2024-12-09 10:26:58.774899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.585 [2024-12-09 10:26:58.774903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.585 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.585 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:16:26.585 10:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:27.516 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.516 10:26:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.516 malloc0 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:27.773 10:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:27.773 00:16:27.773 00:16:27.773 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.773 http://cunit.sourceforge.net/ 00:16:27.773 00:16:27.773 00:16:27.773 Suite: nvme_compliance 00:16:27.773 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 10:27:00.183794] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:27.773 [2024-12-09 10:27:00.185355] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:27.773 [2024-12-09 10:27:00.185380] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:27.773 [2024-12-09 10:27:00.185399] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:27.774 [2024-12-09 10:27:00.186812] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.031 passed 00:16:28.031 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 10:27:00.274546] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.031 [2024-12-09 10:27:00.277561] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.031 passed 00:16:28.031 Test: admin_identify_ns ...[2024-12-09 10:27:00.366885] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.031 [2024-12-09 10:27:00.429163] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:28.031 [2024-12-09 10:27:00.437164] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:28.031 [2024-12-09 10:27:00.458288] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:28.289 passed 00:16:28.289 Test: admin_get_features_mandatory_features ...[2024-12-09 10:27:00.543949] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.289 [2024-12-09 10:27:00.546969] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.289 passed 00:16:28.289 Test: admin_get_features_optional_features ...[2024-12-09 10:27:00.632562] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.289 [2024-12-09 10:27:00.635585] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.289 passed 00:16:28.289 Test: admin_set_features_number_of_queues ...[2024-12-09 10:27:00.721736] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.546 [2024-12-09 10:27:00.824351] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.546 passed 00:16:28.546 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 10:27:00.905098] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.546 [2024-12-09 10:27:00.910157] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.546 passed 00:16:28.803 Test: admin_get_log_page_with_lpo ...[2024-12-09 10:27:00.993450] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.803 [2024-12-09 10:27:01.060161] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:28.803 [2024-12-09 10:27:01.073233] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.803 passed 00:16:28.803 Test: fabric_property_get ...[2024-12-09 10:27:01.158654] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:28.803 [2024-12-09 10:27:01.159936] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:28.803 [2024-12-09 10:27:01.161672] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:28.803 passed 00:16:29.060 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 10:27:01.246213] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.060 [2024-12-09 10:27:01.247550] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:29.060 [2024-12-09 10:27:01.249234] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.060 passed 00:16:29.060 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 10:27:01.331693] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.060 [2024-12-09 10:27:01.418150] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.060 [2024-12-09 10:27:01.434167] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.060 [2024-12-09 10:27:01.436205] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.060 passed 00:16:29.318 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 10:27:01.522616] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.318 [2024-12-09 10:27:01.523937] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:29.318 [2024-12-09 10:27:01.525642] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.318 passed 00:16:29.318 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 10:27:01.606896] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.318 [2024-12-09 10:27:01.681179] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:29.318 [2024-12-09 
10:27:01.705149] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:29.318 [2024-12-09 10:27:01.710283] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.318 passed 00:16:29.575 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 10:27:01.795183] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.575 [2024-12-09 10:27:01.796577] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:29.575 [2024-12-09 10:27:01.796619] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:29.575 [2024-12-09 10:27:01.798219] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.575 passed 00:16:29.576 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 10:27:01.881718] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.576 [2024-12-09 10:27:01.974150] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:29.576 [2024-12-09 10:27:01.982147] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:29.576 [2024-12-09 10:27:01.990151] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:29.576 [2024-12-09 10:27:01.998149] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:29.834 [2024-12-09 10:27:02.027263] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.834 passed 00:16:29.834 Test: admin_create_io_sq_verify_pc ...[2024-12-09 10:27:02.111290] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:29.834 [2024-12-09 10:27:02.129165] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:29.834 [2024-12-09 10:27:02.146658] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:29.834 passed 00:16:29.834 Test: admin_create_io_qp_max_qps ...[2024-12-09 10:27:02.230220] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.207 [2024-12-09 10:27:03.327158] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:31.465 [2024-12-09 10:27:03.705759] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.465 passed 00:16:31.465 Test: admin_create_io_sq_shared_cq ...[2024-12-09 10:27:03.789167] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:31.722 [2024-12-09 10:27:03.923162] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:31.722 [2024-12-09 10:27:03.960256] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:31.722 passed 00:16:31.722 00:16:31.722 Run Summary: Type Total Ran Passed Failed Inactive 00:16:31.722 suites 1 1 n/a 0 0 00:16:31.722 tests 18 18 18 0 0 00:16:31.722 asserts 360 360 360 0 n/a 00:16:31.722 00:16:31.722 Elapsed time = 1.562 seconds 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2523464 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2523464 ']' 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2523464 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2523464 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2523464' 00:16:31.722 killing process with pid 2523464 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2523464 00:16:31.722 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2523464 00:16:31.980 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:31.980 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:31.980 00:16:31.980 real 0m5.907s 00:16:31.980 user 0m16.473s 00:16:31.980 sys 0m0.571s 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:31.981 ************************************ 00:16:31.981 END TEST nvmf_vfio_user_nvme_compliance 00:16:31.981 ************************************ 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:31.981 ************************************ 00:16:31.981 START TEST nvmf_vfio_user_fuzz 00:16:31.981 ************************************ 00:16:31.981 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:32.239 * Looking for test storage... 00:16:32.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:32.239 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:32.240 10:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:32.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.240 --rc genhtml_branch_coverage=1 00:16:32.240 --rc genhtml_function_coverage=1 00:16:32.240 --rc genhtml_legend=1 00:16:32.240 --rc geninfo_all_blocks=1 00:16:32.240 --rc geninfo_unexecuted_blocks=1 00:16:32.240 00:16:32.240 ' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:32.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.240 --rc genhtml_branch_coverage=1 00:16:32.240 --rc genhtml_function_coverage=1 00:16:32.240 --rc genhtml_legend=1 00:16:32.240 --rc geninfo_all_blocks=1 00:16:32.240 --rc geninfo_unexecuted_blocks=1 00:16:32.240 00:16:32.240 ' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:32.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:32.240 --rc genhtml_branch_coverage=1 00:16:32.240 --rc genhtml_function_coverage=1 00:16:32.240 --rc genhtml_legend=1 00:16:32.240 --rc geninfo_all_blocks=1 00:16:32.240 --rc geninfo_unexecuted_blocks=1 00:16:32.240 00:16:32.240 ' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:32.240 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:32.240 --rc genhtml_branch_coverage=1 00:16:32.240 --rc genhtml_function_coverage=1 00:16:32.240 --rc genhtml_legend=1 00:16:32.240 --rc geninfo_all_blocks=1 00:16:32.240 --rc geninfo_unexecuted_blocks=1 00:16:32.240 00:16:32.240 ' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.240 10:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:32.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2524309 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2524309' 00:16:32.240 Process pid: 2524309 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2524309 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2524309 ']' 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.240 10:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.240 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:32.498 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.498 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:16:32.498 10:27:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 malloc0 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:33.875 10:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:06.025 Fuzzing completed. Shutting down the fuzz application 00:17:06.025 00:17:06.025 Dumping successful admin opcodes: 00:17:06.025 9, 10, 00:17:06.025 Dumping successful io opcodes: 00:17:06.025 0, 00:17:06.025 NS: 0x20000081ef00 I/O qp, Total commands completed: 657732, total successful commands: 2559, random_seed: 1719683904 00:17:06.025 NS: 0x20000081ef00 admin qp, Total commands completed: 103472, total successful commands: 25, random_seed: 3844567424 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2524309 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2524309 ']' 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2524309 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524309 00:17:06.025 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524309' 00:17:06.025 killing process with pid 2524309 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2524309 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2524309 00:17:06.025 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:06.026 00:17:06.026 real 0m32.329s 00:17:06.026 user 0m30.876s 00:17:06.026 sys 0m29.278s 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:06.026 ************************************ 00:17:06.026 END TEST nvmf_vfio_user_fuzz 00:17:06.026 ************************************ 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:06.026 ************************************ 00:17:06.026 START TEST nvmf_auth_target 00:17:06.026 ************************************ 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:06.026 * Looking for test storage... 00:17:06.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:06.026 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:06.026 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.026 --rc genhtml_branch_coverage=1 00:17:06.026 --rc genhtml_function_coverage=1 00:17:06.026 --rc genhtml_legend=1 00:17:06.026 --rc geninfo_all_blocks=1 00:17:06.026 --rc geninfo_unexecuted_blocks=1 00:17:06.026 00:17:06.026 ' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.026 --rc genhtml_branch_coverage=1 00:17:06.026 --rc genhtml_function_coverage=1 00:17:06.026 --rc genhtml_legend=1 00:17:06.026 --rc geninfo_all_blocks=1 00:17:06.026 --rc geninfo_unexecuted_blocks=1 00:17:06.026 00:17:06.026 ' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.026 --rc genhtml_branch_coverage=1 00:17:06.026 --rc genhtml_function_coverage=1 00:17:06.026 --rc genhtml_legend=1 00:17:06.026 --rc geninfo_all_blocks=1 00:17:06.026 --rc geninfo_unexecuted_blocks=1 00:17:06.026 00:17:06.026 ' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:06.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:06.026 --rc genhtml_branch_coverage=1 00:17:06.026 --rc genhtml_function_coverage=1 00:17:06.026 --rc genhtml_legend=1 00:17:06.026 
--rc geninfo_all_blocks=1 00:17:06.026 --rc geninfo_unexecuted_blocks=1 00:17:06.026 00:17:06.026 ' 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:06.026 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:06.027 
10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:06.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:06.027 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:06.027 10:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:06.027 10:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.594 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.595 10:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.595 10:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:06.595 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:06.595 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.595 
10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:06.595 Found net devices under 0000:09:00.0: cvl_0_0 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.595 
10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:06.595 Found net devices under 0000:09:00.1: cvl_0_1 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.595 10:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.595 10:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.595 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.595 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.595 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.853 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.853 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:17:06.853 00:17:06.853 --- 10.0.0.2 ping statistics --- 00:17:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.853 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.853 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:06.853 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:17:06.853 00:17:06.853 --- 10.0.0.1 ping statistics --- 00:17:06.853 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.853 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
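The trace above (`nvmf_tcp_init`) splits the dual-port E810 NIC so that target and initiator traffic crosses a real link: one port is moved into a private network namespace for the target, the other stays in the root namespace for the initiator. Consolidated from the xtrace entries, the pattern looks like the sketch below; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are specific to this run, and the commands require root on a machine with the hardware present, so treat this as an illustration rather than a runnable test.

```shell
# Sketch of the namespace split performed by nvmf_tcp_init in the trace.
# Interface names and addresses are taken from this run; requires root.
NS=cvl_0_0_ns_spdk

ip netns add "$NS"                          # private namespace for the target
ip link set cvl_0_0 netns "$NS"             # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify reachability both ways,
# matching the iptables/ping steps in the trace.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Once both pings succeed, the target application is launched under `ip netns exec "$NS"` (the `NVMF_TARGET_NS_CMD` prefix seen later in the trace), so it listens on 10.0.0.2 inside the namespace while the initiator connects from 10.0.0.1 outside it.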
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2530271 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2530271 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2530271 ']' 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.853 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2530292 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f72c279b6f6ad4e84bf9474f02fa3d3212b5f64f30c2a78d 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iED 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f72c279b6f6ad4e84bf9474f02fa3d3212b5f64f30c2a78d 0 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f72c279b6f6ad4e84bf9474f02fa3d3212b5f64f30c2a78d 0 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f72c279b6f6ad4e84bf9474f02fa3d3212b5f64f30c2a78d 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iED 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iED 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.iED 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8cae6701afa3a956a984f7d7b5953c0fe29f2d4c633b785ad8242c5f036927f8 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.CQW 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8cae6701afa3a956a984f7d7b5953c0fe29f2d4c633b785ad8242c5f036927f8 3 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8cae6701afa3a956a984f7d7b5953c0fe29f2d4c633b785ad8242c5f036927f8 3 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8cae6701afa3a956a984f7d7b5953c0fe29f2d4c633b785ad8242c5f036927f8 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.CQW 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.CQW 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.CQW 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=09450486cb750c52a7c9ac91c1feeebd 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Rwi 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 09450486cb750c52a7c9ac91c1feeebd 1 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
09450486cb750c52a7c9ac91c1feeebd 1 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=09450486cb750c52a7c9ac91c1feeebd 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Rwi 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Rwi 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.Rwi 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:07.112 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ac075f9c224578142633b30ff9e01c89ca10751509853c43 00:17:07.371 10:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.BuA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ac075f9c224578142633b30ff9e01c89ca10751509853c43 2 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ac075f9c224578142633b30ff9e01c89ca10751509853c43 2 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ac075f9c224578142633b30ff9e01c89ca10751509853c43 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.BuA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.BuA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.BuA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e1af7ee743ad6111df8950d4c6d8f21c20fd782f52341eaa 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Omv 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e1af7ee743ad6111df8950d4c6d8f21c20fd782f52341eaa 2 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e1af7ee743ad6111df8950d4c6d8f21c20fd782f52341eaa 2 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e1af7ee743ad6111df8950d4c6d8f21c20fd782f52341eaa 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Omv 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Omv 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.Omv 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=98ce0532cdbbe71e8bfaae0a4b25ea42 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RtE 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 98ce0532cdbbe71e8bfaae0a4b25ea42 1 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 98ce0532cdbbe71e8bfaae0a4b25ea42 1 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=98ce0532cdbbe71e8bfaae0a4b25ea42 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RtE 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RtE 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.RtE 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2bf0525ef32544c5b6623af8a1098ebb025b540cc6c479e1f00b9405a3fcf785 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.8EA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2bf0525ef32544c5b6623af8a1098ebb025b540cc6c479e1f00b9405a3fcf785 3 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 2bf0525ef32544c5b6623af8a1098ebb025b540cc6c479e1f00b9405a3fcf785 3 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2bf0525ef32544c5b6623af8a1098ebb025b540cc6c479e1f00b9405a3fcf785 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.8EA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.8EA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.8EA 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2530271 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2530271 ']' 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
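Each `gen_dhchap_key <digest> <len>` call above reads `len/2` random bytes with `xxd -p -c0 /dev/urandom`, then `format_dhchap_key` wraps the hex secret in the `DHHC-1:...` transport form via an inline `python -` step before writing it to a `chmod 0600` temp file. The trace elides that python body; the sketch below reconstructs it on the assumption that SPDK follows the NVMe DH-HMAC-CHAP secret representation also used by nvme-cli's `gen-dhchap-key`, i.e. `DHHC-1:<hash-id>:<base64(secret || crc32(secret))>:` with a 4-byte little-endian CRC32 appended before base64 encoding.

```python
# Hedged reconstruction of the format_dhchap_key "python -" step in the
# trace. Assumption: the NVMe DH-HMAC-CHAP secret representation,
#   DHHC-1:<hash-id>:<base64(secret || crc32(secret), LE)>:
# where hash-id is 00=none, 01=sha256, 02=sha384, 03=sha512, matching
# the digests map declared in the trace.
import base64
import binascii

DIGESTS = {"null": 0, "sha256": 1, "sha384": 2, "sha512": 3}

def format_dhchap_key(hex_key: str, digest_id: int) -> str:
    """Wrap a raw hex secret in the DHHC-1 transport format."""
    raw = bytes.fromhex(hex_key)
    # 4-byte little-endian CRC32 of the secret is appended, then the
    # whole blob is base64-encoded between the header and trailing colon.
    crc = binascii.crc32(raw).to_bytes(4, "little")
    blob = base64.b64encode(raw + crc).decode()
    return "DHHC-1:{:02x}:{}:".format(digest_id, blob)

# Example with the 48-hex-char (24-byte) null-digest key from keys[0]:
secret = format_dhchap_key(
    "f72c279b6f6ad4e84bf9474f02fa3d3212b5f64f30c2a78d", DIGESTS["null"])
```

The formatted string is what gets stored in the `/tmp/spdk.key-*` files and later registered with `keyring_file_add_key`; a receiver can validate it by base64-decoding the middle field and checking the trailing CRC32 against the leading secret bytes.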
00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.371 10:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2530292 /var/tmp/host.sock 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2530292 ']' 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:07.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.629 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.886 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.886 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:07.886 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:07.886 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.886 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iED 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.iED 00:17:08.144 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.iED 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.CQW ]] 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CQW 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CQW 00:17:08.402 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CQW 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Rwi 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Rwi 00:17:08.660 10:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Rwi 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.BuA ]] 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BuA 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BuA 00:17:08.918 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BuA 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Omv 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Omv 00:17:09.176 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Omv 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.RtE ]] 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RtE 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RtE 00:17:09.434 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RtE 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8EA 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.8EA 00:17:09.692 10:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.8EA 00:17:09.949 10:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:09.949 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:09.949 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.949 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.949 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:09.949 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.207 10:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.207 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.465 00:17:10.465 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.465 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.465 10:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.723 { 00:17:10.723 "cntlid": 1, 00:17:10.723 "qid": 0, 00:17:10.723 "state": "enabled", 00:17:10.723 "thread": "nvmf_tgt_poll_group_000", 00:17:10.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:10.723 "listen_address": { 00:17:10.723 "trtype": "TCP", 00:17:10.723 "adrfam": "IPv4", 00:17:10.723 "traddr": "10.0.0.2", 00:17:10.723 "trsvcid": "4420" 00:17:10.723 }, 00:17:10.723 "peer_address": { 00:17:10.723 "trtype": "TCP", 00:17:10.723 "adrfam": "IPv4", 00:17:10.723 "traddr": "10.0.0.1", 00:17:10.723 "trsvcid": "54488" 00:17:10.723 }, 00:17:10.723 "auth": { 00:17:10.723 "state": "completed", 00:17:10.723 "digest": "sha256", 00:17:10.723 "dhgroup": "null" 00:17:10.723 } 00:17:10.723 } 00:17:10.723 ]' 00:17:10.723 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.981 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.240 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:11.240 10:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:12.173 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.431 10:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.688 00:17:12.688 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.688 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.688 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.945 { 00:17:12.945 "cntlid": 3, 00:17:12.945 "qid": 0, 00:17:12.945 "state": "enabled", 00:17:12.945 "thread": "nvmf_tgt_poll_group_000", 00:17:12.945 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:12.945 "listen_address": { 00:17:12.945 "trtype": "TCP", 00:17:12.945 "adrfam": "IPv4", 00:17:12.945 
"traddr": "10.0.0.2", 00:17:12.945 "trsvcid": "4420" 00:17:12.945 }, 00:17:12.945 "peer_address": { 00:17:12.945 "trtype": "TCP", 00:17:12.945 "adrfam": "IPv4", 00:17:12.945 "traddr": "10.0.0.1", 00:17:12.945 "trsvcid": "54508" 00:17:12.945 }, 00:17:12.945 "auth": { 00:17:12.945 "state": "completed", 00:17:12.945 "digest": "sha256", 00:17:12.945 "dhgroup": "null" 00:17:12.945 } 00:17:12.945 } 00:17:12.945 ]' 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.945 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.201 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.201 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.201 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.201 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.201 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.458 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:13.458 10:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.389 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.646 10:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.903 00:17:14.903 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.903 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.903 
10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.160 { 00:17:15.160 "cntlid": 5, 00:17:15.160 "qid": 0, 00:17:15.160 "state": "enabled", 00:17:15.160 "thread": "nvmf_tgt_poll_group_000", 00:17:15.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:15.160 "listen_address": { 00:17:15.160 "trtype": "TCP", 00:17:15.160 "adrfam": "IPv4", 00:17:15.160 "traddr": "10.0.0.2", 00:17:15.160 "trsvcid": "4420" 00:17:15.160 }, 00:17:15.160 "peer_address": { 00:17:15.160 "trtype": "TCP", 00:17:15.160 "adrfam": "IPv4", 00:17:15.160 "traddr": "10.0.0.1", 00:17:15.160 "trsvcid": "54550" 00:17:15.160 }, 00:17:15.160 "auth": { 00:17:15.160 "state": "completed", 00:17:15.160 "digest": "sha256", 00:17:15.160 "dhgroup": "null" 00:17:15.160 } 00:17:15.160 } 00:17:15.160 ]' 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.160 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:17:15.417 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.417 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.417 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.417 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.417 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.674 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:15.674 10:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.608 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.866 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.124 00:17:17.124 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.124 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.124 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.382 
10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.382 { 00:17:17.382 "cntlid": 7, 00:17:17.382 "qid": 0, 00:17:17.382 "state": "enabled", 00:17:17.382 "thread": "nvmf_tgt_poll_group_000", 00:17:17.382 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:17.382 "listen_address": { 00:17:17.382 "trtype": "TCP", 00:17:17.382 "adrfam": "IPv4", 00:17:17.382 "traddr": "10.0.0.2", 00:17:17.382 "trsvcid": "4420" 00:17:17.382 }, 00:17:17.382 "peer_address": { 00:17:17.382 "trtype": "TCP", 00:17:17.382 "adrfam": "IPv4", 00:17:17.382 "traddr": "10.0.0.1", 00:17:17.382 "trsvcid": "54582" 00:17:17.382 }, 00:17:17.382 "auth": { 00:17:17.382 "state": "completed", 00:17:17.382 "digest": "sha256", 00:17:17.382 "dhgroup": "null" 00:17:17.382 } 00:17:17.382 } 00:17:17.382 ]' 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.382 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.640 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:17.640 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.641 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.641 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.641 10:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.898 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:17.898 10:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:18.829 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:18.830 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:17:19.086 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:19.086 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.086 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:19.086 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.086 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.087 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:19.344 00:17:19.344 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.344 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.344 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.603 { 00:17:19.603 "cntlid": 9, 00:17:19.603 "qid": 0, 00:17:19.603 "state": "enabled", 00:17:19.603 "thread": "nvmf_tgt_poll_group_000", 00:17:19.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:19.603 "listen_address": { 00:17:19.603 "trtype": "TCP", 00:17:19.603 "adrfam": "IPv4", 00:17:19.603 "traddr": "10.0.0.2", 00:17:19.603 "trsvcid": "4420" 00:17:19.603 }, 00:17:19.603 "peer_address": { 00:17:19.603 "trtype": "TCP", 00:17:19.603 "adrfam": "IPv4", 00:17:19.603 "traddr": "10.0.0.1", 00:17:19.603 "trsvcid": "36474" 00:17:19.603 
}, 00:17:19.603 "auth": { 00:17:19.603 "state": "completed", 00:17:19.603 "digest": "sha256", 00:17:19.603 "dhgroup": "ffdhe2048" 00:17:19.603 } 00:17:19.603 } 00:17:19.603 ]' 00:17:19.603 10:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.603 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.603 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.862 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:19.862 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.862 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.862 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.862 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.120 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:20.120 10:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret 
DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.076 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.076 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.333 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.334 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.334 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:21.591 00:17:21.591 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.591 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.591 10:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.848 { 00:17:21.848 "cntlid": 11, 00:17:21.848 "qid": 0, 00:17:21.848 "state": "enabled", 00:17:21.848 "thread": "nvmf_tgt_poll_group_000", 00:17:21.848 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:21.848 "listen_address": { 00:17:21.848 "trtype": "TCP", 00:17:21.848 "adrfam": "IPv4", 00:17:21.848 "traddr": "10.0.0.2", 00:17:21.848 "trsvcid": "4420" 00:17:21.848 }, 00:17:21.848 "peer_address": { 00:17:21.848 "trtype": "TCP", 00:17:21.848 "adrfam": "IPv4", 00:17:21.848 "traddr": "10.0.0.1", 00:17:21.848 "trsvcid": "36500" 00:17:21.848 }, 00:17:21.848 "auth": { 00:17:21.848 "state": "completed", 00:17:21.848 "digest": "sha256", 00:17:21.848 "dhgroup": "ffdhe2048" 00:17:21.848 } 00:17:21.848 } 00:17:21.848 ]' 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.848 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:21.848 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.106 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.106 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.106 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.364 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:22.364 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.295 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:23.860 00:17:23.860 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.860 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.860 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.117 10:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.117 { 00:17:24.117 "cntlid": 13, 00:17:24.117 "qid": 0, 00:17:24.117 "state": "enabled", 00:17:24.117 "thread": "nvmf_tgt_poll_group_000", 00:17:24.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:24.117 "listen_address": { 00:17:24.117 "trtype": "TCP", 00:17:24.117 "adrfam": "IPv4", 00:17:24.117 "traddr": "10.0.0.2", 00:17:24.117 "trsvcid": "4420" 00:17:24.117 }, 00:17:24.117 "peer_address": { 00:17:24.117 "trtype": "TCP", 00:17:24.117 "adrfam": "IPv4", 00:17:24.117 "traddr": "10.0.0.1", 00:17:24.117 "trsvcid": "36528" 00:17:24.117 }, 00:17:24.117 "auth": { 00:17:24.117 "state": "completed", 00:17:24.117 "digest": "sha256", 00:17:24.117 "dhgroup": "ffdhe2048" 00:17:24.117 } 00:17:24.117 } 00:17:24.117 ]' 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.117 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.374 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:24.374 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.307 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.565 10:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:25.823 00:17:25.823 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.823 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.823 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.080 { 00:17:26.080 "cntlid": 15, 00:17:26.080 "qid": 0, 00:17:26.080 "state": "enabled", 00:17:26.080 "thread": "nvmf_tgt_poll_group_000", 00:17:26.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:26.080 "listen_address": { 00:17:26.080 "trtype": "TCP", 00:17:26.080 "adrfam": "IPv4", 00:17:26.080 "traddr": "10.0.0.2", 00:17:26.080 "trsvcid": "4420" 00:17:26.080 }, 00:17:26.080 "peer_address": { 00:17:26.080 "trtype": "TCP", 00:17:26.080 "adrfam": "IPv4", 00:17:26.080 "traddr": "10.0.0.1", 
00:17:26.080 "trsvcid": "36566" 00:17:26.080 }, 00:17:26.080 "auth": { 00:17:26.080 "state": "completed", 00:17:26.080 "digest": "sha256", 00:17:26.080 "dhgroup": "ffdhe2048" 00:17:26.080 } 00:17:26.080 } 00:17:26.080 ]' 00:17:26.080 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.337 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.594 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:26.594 10:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.527 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.527 10:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.786 10:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.786 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:28.044 00:17:28.044 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.044 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.044 10:28:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.362 { 00:17:28.362 "cntlid": 17, 00:17:28.362 "qid": 0, 00:17:28.362 "state": "enabled", 00:17:28.362 "thread": "nvmf_tgt_poll_group_000", 00:17:28.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:28.362 "listen_address": { 00:17:28.362 "trtype": "TCP", 00:17:28.362 "adrfam": "IPv4", 00:17:28.362 "traddr": "10.0.0.2", 00:17:28.362 "trsvcid": "4420" 00:17:28.362 }, 00:17:28.362 "peer_address": { 00:17:28.362 "trtype": "TCP", 00:17:28.362 "adrfam": "IPv4", 00:17:28.362 "traddr": "10.0.0.1", 00:17:28.362 "trsvcid": "36592" 00:17:28.362 }, 00:17:28.362 "auth": { 00:17:28.362 "state": "completed", 00:17:28.362 "digest": "sha256", 00:17:28.362 "dhgroup": "ffdhe3072" 00:17:28.362 } 00:17:28.362 } 00:17:28.362 ]' 00:17:28.362 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.639 10:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.897 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:28.897 10:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:29.828 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.138 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.395 00:17:30.395 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.395 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.395 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.652 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.652 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.652 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.652 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.652 
10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.652 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.652 { 00:17:30.652 "cntlid": 19, 00:17:30.652 "qid": 0, 00:17:30.652 "state": "enabled", 00:17:30.652 "thread": "nvmf_tgt_poll_group_000", 00:17:30.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:30.652 "listen_address": { 00:17:30.652 "trtype": "TCP", 00:17:30.652 "adrfam": "IPv4", 00:17:30.652 "traddr": "10.0.0.2", 00:17:30.652 "trsvcid": "4420" 00:17:30.652 }, 00:17:30.652 "peer_address": { 00:17:30.652 "trtype": "TCP", 00:17:30.652 "adrfam": "IPv4", 00:17:30.652 "traddr": "10.0.0.1", 00:17:30.652 "trsvcid": "37392" 00:17:30.652 }, 00:17:30.652 "auth": { 00:17:30.652 "state": "completed", 00:17:30.652 "digest": "sha256", 00:17:30.652 "dhgroup": "ffdhe3072" 00:17:30.652 } 00:17:30.652 } 00:17:30.652 ]' 00:17:30.652 10:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.652 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:31.215 10:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.144 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.145 10:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.710 00:17:32.710 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.710 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.710 10:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.967 { 00:17:32.967 "cntlid": 21, 00:17:32.967 "qid": 0, 00:17:32.967 "state": "enabled", 00:17:32.967 "thread": "nvmf_tgt_poll_group_000", 00:17:32.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:32.967 "listen_address": { 00:17:32.967 "trtype": "TCP", 00:17:32.967 "adrfam": "IPv4", 00:17:32.967 "traddr": "10.0.0.2", 00:17:32.967 "trsvcid": "4420" 00:17:32.967 }, 00:17:32.967 "peer_address": { 
00:17:32.967 "trtype": "TCP", 00:17:32.967 "adrfam": "IPv4", 00:17:32.967 "traddr": "10.0.0.1", 00:17:32.967 "trsvcid": "37412" 00:17:32.967 }, 00:17:32.967 "auth": { 00:17:32.967 "state": "completed", 00:17:32.967 "digest": "sha256", 00:17:32.967 "dhgroup": "ffdhe3072" 00:17:32.967 } 00:17:32.967 } 00:17:32.967 ]' 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.967 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.224 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:33.224 10:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.153 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:34.410 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:34.410 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.410 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:34.410 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:34.410 10:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.411 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.976 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.976 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.233 { 00:17:35.233 "cntlid": 23, 00:17:35.233 "qid": 0, 00:17:35.233 "state": "enabled", 00:17:35.233 "thread": "nvmf_tgt_poll_group_000", 00:17:35.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:35.233 "listen_address": { 00:17:35.233 "trtype": "TCP", 00:17:35.233 "adrfam": "IPv4", 00:17:35.233 "traddr": "10.0.0.2", 00:17:35.233 "trsvcid": "4420" 00:17:35.233 }, 00:17:35.233 "peer_address": { 00:17:35.233 "trtype": "TCP", 00:17:35.233 "adrfam": "IPv4", 00:17:35.233 "traddr": "10.0.0.1", 00:17:35.233 "trsvcid": "37438" 00:17:35.233 }, 00:17:35.233 "auth": { 00:17:35.233 "state": "completed", 00:17:35.233 "digest": "sha256", 00:17:35.233 "dhgroup": "ffdhe3072" 00:17:35.233 } 00:17:35.233 } 00:17:35.233 ]' 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.233 10:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.233 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.491 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:35.491 10:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.424 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.682 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.940 00:17:37.199 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.199 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.199 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.456 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.456 { 00:17:37.456 "cntlid": 25, 00:17:37.456 "qid": 0, 00:17:37.456 "state": "enabled", 00:17:37.456 "thread": "nvmf_tgt_poll_group_000", 00:17:37.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:37.456 "listen_address": { 00:17:37.456 "trtype": "TCP", 00:17:37.456 "adrfam": "IPv4", 00:17:37.456 "traddr": "10.0.0.2", 00:17:37.456 "trsvcid": "4420" 00:17:37.456 }, 00:17:37.456 "peer_address": { 00:17:37.456 "trtype": "TCP", 00:17:37.456 "adrfam": "IPv4", 00:17:37.456 "traddr": "10.0.0.1", 00:17:37.456 "trsvcid": "37450" 00:17:37.456 }, 00:17:37.456 "auth": { 00:17:37.456 "state": "completed", 00:17:37.456 "digest": "sha256", 00:17:37.456 "dhgroup": "ffdhe4096" 00:17:37.456 } 00:17:37.456 } 00:17:37.456 ]' 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.456 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.714 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:37.714 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:38.646 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.646 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:38.646 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.647 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.647 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.647 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.647 10:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.647 10:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:38.904 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.481 00:17:39.481 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.481 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.481 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.739 { 00:17:39.739 "cntlid": 27, 00:17:39.739 "qid": 0, 00:17:39.739 "state": "enabled", 00:17:39.739 "thread": "nvmf_tgt_poll_group_000", 00:17:39.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:39.739 "listen_address": { 00:17:39.739 "trtype": "TCP", 00:17:39.739 "adrfam": "IPv4", 00:17:39.739 "traddr": "10.0.0.2", 00:17:39.739 
"trsvcid": "4420" 00:17:39.739 }, 00:17:39.739 "peer_address": { 00:17:39.739 "trtype": "TCP", 00:17:39.739 "adrfam": "IPv4", 00:17:39.739 "traddr": "10.0.0.1", 00:17:39.739 "trsvcid": "50616" 00:17:39.739 }, 00:17:39.739 "auth": { 00:17:39.739 "state": "completed", 00:17:39.739 "digest": "sha256", 00:17:39.739 "dhgroup": "ffdhe4096" 00:17:39.739 } 00:17:39.739 } 00:17:39.739 ]' 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.739 10:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.739 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:39.739 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.739 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.739 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.739 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.996 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:39.996 10:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:40.927 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.185 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:41.443 00:17:41.443 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.443 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.443 10:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.009 { 00:17:42.009 "cntlid": 29, 00:17:42.009 "qid": 0, 00:17:42.009 "state": "enabled", 00:17:42.009 "thread": "nvmf_tgt_poll_group_000", 00:17:42.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:42.009 "listen_address": { 00:17:42.009 "trtype": "TCP", 00:17:42.009 "adrfam": "IPv4", 00:17:42.009 "traddr": "10.0.0.2", 00:17:42.009 "trsvcid": "4420" 00:17:42.009 }, 00:17:42.009 "peer_address": { 00:17:42.009 "trtype": "TCP", 00:17:42.009 "adrfam": "IPv4", 00:17:42.009 "traddr": "10.0.0.1", 00:17:42.009 "trsvcid": "50632" 00:17:42.009 }, 00:17:42.009 "auth": { 00:17:42.009 "state": "completed", 00:17:42.009 "digest": "sha256", 00:17:42.009 "dhgroup": "ffdhe4096" 00:17:42.009 } 00:17:42.009 } 00:17:42.009 ]' 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.009 10:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.009 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.267 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:42.267 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.200 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.200 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.457 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:43.715 00:17:43.715 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.715 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.715 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.974 { 00:17:43.974 "cntlid": 31, 00:17:43.974 "qid": 0, 00:17:43.974 "state": "enabled", 00:17:43.974 "thread": "nvmf_tgt_poll_group_000", 00:17:43.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:43.974 "listen_address": { 00:17:43.974 "trtype": "TCP", 00:17:43.974 "adrfam": "IPv4", 00:17:43.974 "traddr": "10.0.0.2", 00:17:43.974 "trsvcid": "4420" 00:17:43.974 }, 00:17:43.974 "peer_address": { 00:17:43.974 "trtype": "TCP", 00:17:43.974 "adrfam": "IPv4", 00:17:43.974 "traddr": "10.0.0.1", 00:17:43.974 "trsvcid": "50654" 00:17:43.974 }, 00:17:43.974 "auth": { 00:17:43.974 "state": "completed", 00:17:43.974 "digest": "sha256", 00:17:43.974 "dhgroup": "ffdhe4096" 00:17:43.974 } 00:17:43.974 } 00:17:43.974 ]' 00:17:43.974 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.232 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.490 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:44.490 10:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.424 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.424 10:28:17 
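At this point the trace moves from `ffdhe4096` to `ffdhe6144`: the test iterates `dhgroups` in an outer loop and key IDs 0–3 in an inner loop, reconfiguring the host with `bdev_nvme_set_options` before each attach. A rough sketch of that driving loop, with `rpc.py` replaced by an `echo` stub so it runs anywhere (the loop bounds and argument lists are inferred from this trace, not copied from `auth.sh`):

```shell
# Stub standing in for scripts/rpc.py against the host socket.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

# For each DH group, re-arm the host options, then attach with each key.
for dhgroup in ffdhe4096 ffdhe6144; do
  for keyid in 0 1 2 3; do
    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    rpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -s 4420 -b nvme0 --dhchap-key "key$keyid"
  done
done
```

The real script also adds/removes the host NQN on the subsystem and detaches the controller between iterations, as the surrounding log entries show.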
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:45.682 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.248 00:17:46.248 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.248 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.248 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.506 { 00:17:46.506 "cntlid": 33, 00:17:46.506 "qid": 0, 00:17:46.506 "state": "enabled", 00:17:46.506 "thread": "nvmf_tgt_poll_group_000", 00:17:46.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:46.506 "listen_address": { 00:17:46.506 "trtype": "TCP", 00:17:46.506 "adrfam": "IPv4", 00:17:46.506 "traddr": "10.0.0.2", 00:17:46.506 
"trsvcid": "4420" 00:17:46.506 }, 00:17:46.506 "peer_address": { 00:17:46.506 "trtype": "TCP", 00:17:46.506 "adrfam": "IPv4", 00:17:46.506 "traddr": "10.0.0.1", 00:17:46.506 "trsvcid": "50688" 00:17:46.506 }, 00:17:46.506 "auth": { 00:17:46.506 "state": "completed", 00:17:46.506 "digest": "sha256", 00:17:46.506 "dhgroup": "ffdhe6144" 00:17:46.506 } 00:17:46.506 } 00:17:46.506 ]' 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.506 10:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.764 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:46.764 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:47.697 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.260 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.260 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.518 00:17:48.518 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.518 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.518 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.084 { 00:17:49.084 "cntlid": 35, 00:17:49.084 "qid": 0, 00:17:49.084 "state": "enabled", 00:17:49.084 "thread": "nvmf_tgt_poll_group_000", 00:17:49.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:49.084 "listen_address": { 00:17:49.084 "trtype": "TCP", 00:17:49.084 "adrfam": "IPv4", 00:17:49.084 "traddr": "10.0.0.2", 00:17:49.084 "trsvcid": "4420" 00:17:49.084 }, 00:17:49.084 "peer_address": { 00:17:49.084 "trtype": "TCP", 00:17:49.084 "adrfam": "IPv4", 00:17:49.084 "traddr": "10.0.0.1", 00:17:49.084 "trsvcid": "50716" 00:17:49.084 }, 00:17:49.084 "auth": { 00:17:49.084 "state": "completed", 00:17:49.084 "digest": "sha256", 00:17:49.084 "dhgroup": "ffdhe6144" 00:17:49.084 } 00:17:49.084 } 00:17:49.084 ]' 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.084 10:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.084 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.363 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:49.363 10:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.294 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.294 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.552 10:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:51.117 00:17:51.117 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.117 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.117 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.375 10:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.375 { 00:17:51.375 "cntlid": 37, 00:17:51.375 "qid": 0, 00:17:51.375 "state": "enabled", 00:17:51.375 "thread": "nvmf_tgt_poll_group_000", 00:17:51.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:51.375 "listen_address": { 00:17:51.375 "trtype": "TCP", 00:17:51.375 "adrfam": "IPv4", 00:17:51.375 "traddr": "10.0.0.2", 00:17:51.375 "trsvcid": "4420" 00:17:51.375 }, 00:17:51.375 "peer_address": { 00:17:51.375 "trtype": "TCP", 00:17:51.375 "adrfam": "IPv4", 00:17:51.375 "traddr": "10.0.0.1", 00:17:51.375 "trsvcid": "54744" 00:17:51.375 }, 00:17:51.375 "auth": { 00:17:51.375 "state": "completed", 00:17:51.375 "digest": "sha256", 00:17:51.375 "dhgroup": "ffdhe6144" 00:17:51.375 } 00:17:51.375 } 00:17:51.375 ]' 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.375 10:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.632 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:51.632 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:17:52.565 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.566 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.824 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.390 00:17:53.390 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.390 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.390 10:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.956 { 00:17:53.956 "cntlid": 39, 00:17:53.956 "qid": 0, 00:17:53.956 "state": "enabled", 00:17:53.956 "thread": "nvmf_tgt_poll_group_000", 00:17:53.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:53.956 "listen_address": { 00:17:53.956 "trtype": "TCP", 00:17:53.956 "adrfam": 
"IPv4", 00:17:53.956 "traddr": "10.0.0.2", 00:17:53.956 "trsvcid": "4420" 00:17:53.956 }, 00:17:53.956 "peer_address": { 00:17:53.956 "trtype": "TCP", 00:17:53.956 "adrfam": "IPv4", 00:17:53.956 "traddr": "10.0.0.1", 00:17:53.956 "trsvcid": "54786" 00:17:53.956 }, 00:17:53.956 "auth": { 00:17:53.956 "state": "completed", 00:17:53.956 "digest": "sha256", 00:17:53.956 "dhgroup": "ffdhe6144" 00:17:53.956 } 00:17:53.956 } 00:17:53.956 ]' 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.956 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.957 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.957 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.215 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:54.215 10:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.148 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:55.404 
10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.404 10:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:56.336 00:17:56.336 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.336 10:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.336 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.594 { 00:17:56.594 "cntlid": 41, 00:17:56.594 "qid": 0, 00:17:56.594 "state": "enabled", 00:17:56.594 "thread": "nvmf_tgt_poll_group_000", 00:17:56.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:56.594 "listen_address": { 00:17:56.594 "trtype": "TCP", 00:17:56.594 "adrfam": "IPv4", 00:17:56.594 "traddr": "10.0.0.2", 00:17:56.594 "trsvcid": "4420" 00:17:56.594 }, 00:17:56.594 "peer_address": { 00:17:56.594 "trtype": "TCP", 00:17:56.594 "adrfam": "IPv4", 00:17:56.594 "traddr": "10.0.0.1", 00:17:56.594 "trsvcid": "54808" 00:17:56.594 }, 00:17:56.594 "auth": { 00:17:56.594 "state": "completed", 00:17:56.594 "digest": "sha256", 00:17:56.594 "dhgroup": "ffdhe8192" 00:17:56.594 } 00:17:56.594 } 00:17:56.594 ]' 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.594 10:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.852 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:56.852 10:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:57.785 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.042 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:58.974 00:17:58.974 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:58.974 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:58.974 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.231 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.231 { 00:17:59.231 "cntlid": 43, 00:17:59.231 "qid": 0, 00:17:59.231 "state": "enabled", 00:17:59.231 "thread": "nvmf_tgt_poll_group_000", 00:17:59.231 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:59.231 "listen_address": { 00:17:59.231 "trtype": "TCP", 00:17:59.231 "adrfam": "IPv4", 00:17:59.231 "traddr": "10.0.0.2", 00:17:59.231 "trsvcid": "4420" 00:17:59.231 }, 00:17:59.231 "peer_address": { 00:17:59.231 "trtype": "TCP", 00:17:59.231 "adrfam": "IPv4", 00:17:59.231 "traddr": "10.0.0.1", 00:17:59.231 "trsvcid": "54848" 00:17:59.231 }, 00:17:59.231 "auth": { 00:17:59.231 "state": "completed", 00:17:59.231 "digest": "sha256", 00:17:59.231 "dhgroup": "ffdhe8192" 00:17:59.231 } 00:17:59.231 } 00:17:59.231 ]' 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.231 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.489 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.489 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.489 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.747 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:17:59.747 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.681 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.681 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.681 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.939 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.939 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.939 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:00.939 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.505 00:18:01.763 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.763 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.763 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.025 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.025 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.025 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.025 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.025 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.025 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.025 { 00:18:02.025 "cntlid": 45, 00:18:02.025 "qid": 0, 00:18:02.025 "state": "enabled", 00:18:02.025 "thread": "nvmf_tgt_poll_group_000", 00:18:02.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:02.025 
"listen_address": { 00:18:02.025 "trtype": "TCP", 00:18:02.025 "adrfam": "IPv4", 00:18:02.025 "traddr": "10.0.0.2", 00:18:02.025 "trsvcid": "4420" 00:18:02.025 }, 00:18:02.025 "peer_address": { 00:18:02.025 "trtype": "TCP", 00:18:02.025 "adrfam": "IPv4", 00:18:02.025 "traddr": "10.0.0.1", 00:18:02.025 "trsvcid": "55738" 00:18:02.025 }, 00:18:02.026 "auth": { 00:18:02.026 "state": "completed", 00:18:02.026 "digest": "sha256", 00:18:02.026 "dhgroup": "ffdhe8192" 00:18:02.026 } 00:18:02.026 } 00:18:02.026 ]' 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.026 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.284 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:02.284 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.216 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.474 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.406 00:18:04.406 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.406 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:04.406 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.663 { 00:18:04.663 "cntlid": 47, 00:18:04.663 "qid": 0, 00:18:04.663 "state": "enabled", 00:18:04.663 "thread": "nvmf_tgt_poll_group_000", 00:18:04.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:04.663 "listen_address": { 00:18:04.663 "trtype": "TCP", 00:18:04.663 "adrfam": "IPv4", 00:18:04.663 "traddr": "10.0.0.2", 00:18:04.663 "trsvcid": "4420" 00:18:04.663 }, 00:18:04.663 "peer_address": { 00:18:04.663 "trtype": "TCP", 00:18:04.663 "adrfam": "IPv4", 00:18:04.663 "traddr": "10.0.0.1", 00:18:04.663 "trsvcid": "55768" 00:18:04.663 }, 00:18:04.663 "auth": { 00:18:04.663 "state": "completed", 00:18:04.663 "digest": "sha256", 00:18:04.663 "dhgroup": "ffdhe8192" 00:18:04.663 } 00:18:04.663 } 00:18:04.663 ]' 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.663 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.663 10:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.663 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:04.663 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.663 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.663 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.663 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.921 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:04.921 10:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:05.865 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.122 
10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.122 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:06.686 00:18:06.686 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.686 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.686 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.943 { 00:18:06.943 "cntlid": 49, 00:18:06.943 "qid": 0, 00:18:06.943 "state": "enabled", 00:18:06.943 "thread": "nvmf_tgt_poll_group_000", 00:18:06.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:06.943 "listen_address": { 00:18:06.943 "trtype": "TCP", 00:18:06.943 "adrfam": "IPv4", 00:18:06.943 "traddr": "10.0.0.2", 00:18:06.943 "trsvcid": "4420" 00:18:06.943 }, 00:18:06.943 "peer_address": { 00:18:06.943 "trtype": "TCP", 00:18:06.943 "adrfam": "IPv4", 00:18:06.943 "traddr": "10.0.0.1", 00:18:06.943 "trsvcid": "55792" 00:18:06.943 }, 00:18:06.943 "auth": { 00:18:06.943 "state": "completed", 00:18:06.943 "digest": "sha384", 00:18:06.943 "dhgroup": "null" 00:18:06.943 } 00:18:06.943 } 00:18:06.943 ]' 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:06.943 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.200 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:07.200 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.131 10:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.131 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.388 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:08.644 00:18:08.902 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.902 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.902 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.160 { 00:18:09.160 "cntlid": 51, 00:18:09.160 "qid": 0, 00:18:09.160 "state": "enabled", 00:18:09.160 "thread": "nvmf_tgt_poll_group_000", 00:18:09.160 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:09.160 "listen_address": { 00:18:09.160 "trtype": "TCP", 00:18:09.160 "adrfam": "IPv4", 00:18:09.160 "traddr": "10.0.0.2", 00:18:09.160 "trsvcid": "4420" 00:18:09.160 }, 00:18:09.160 "peer_address": { 00:18:09.160 "trtype": "TCP", 00:18:09.160 "adrfam": "IPv4", 00:18:09.160 "traddr": "10.0.0.1", 00:18:09.160 "trsvcid": "35908" 00:18:09.160 }, 00:18:09.160 "auth": { 00:18:09.160 "state": "completed", 00:18:09.160 "digest": "sha384", 00:18:09.160 "dhgroup": "null" 00:18:09.160 } 00:18:09.160 } 00:18:09.160 ]' 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.160 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.418 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:09.418 10:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.356 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.613 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.614 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.614 10:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.871 00:18:10.871 10:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.871 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.871 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.128 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.128 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.128 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.128 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.128 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.128 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.128 { 00:18:11.128 "cntlid": 53, 00:18:11.128 "qid": 0, 00:18:11.128 "state": "enabled", 00:18:11.128 "thread": "nvmf_tgt_poll_group_000", 00:18:11.128 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:11.128 "listen_address": { 00:18:11.128 "trtype": "TCP", 00:18:11.128 "adrfam": "IPv4", 00:18:11.128 "traddr": "10.0.0.2", 00:18:11.128 "trsvcid": "4420" 00:18:11.128 }, 00:18:11.128 "peer_address": { 00:18:11.128 "trtype": "TCP", 00:18:11.128 "adrfam": "IPv4", 00:18:11.128 "traddr": "10.0.0.1", 00:18:11.128 "trsvcid": "35942" 00:18:11.128 }, 00:18:11.128 "auth": { 00:18:11.128 "state": "completed", 00:18:11.128 "digest": "sha384", 00:18:11.128 "dhgroup": "null" 00:18:11.129 } 00:18:11.129 } 00:18:11.129 ]' 00:18:11.129 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.387 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.654 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:11.654 10:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.587 10:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:12.844 
10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.844 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.101 00:18:13.101 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.101 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.101 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.359 10:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.359 { 00:18:13.359 "cntlid": 55, 00:18:13.359 "qid": 0, 00:18:13.359 "state": "enabled", 00:18:13.359 "thread": "nvmf_tgt_poll_group_000", 00:18:13.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:13.359 "listen_address": { 00:18:13.359 "trtype": "TCP", 00:18:13.359 "adrfam": "IPv4", 00:18:13.359 "traddr": "10.0.0.2", 00:18:13.359 "trsvcid": "4420" 00:18:13.359 }, 00:18:13.359 "peer_address": { 00:18:13.359 "trtype": "TCP", 00:18:13.359 "adrfam": "IPv4", 00:18:13.359 "traddr": "10.0.0.1", 00:18:13.359 "trsvcid": "35978" 00:18:13.359 }, 00:18:13.359 "auth": { 00:18:13.359 "state": "completed", 00:18:13.359 "digest": "sha384", 00:18:13.359 "dhgroup": "null" 00:18:13.359 } 00:18:13.359 } 00:18:13.359 ]' 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.359 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:13.617 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.617 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.617 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.617 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.617 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.617 10:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.935 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:13.935 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.866 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:14.866 10:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.125 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:15.382 00:18:15.382 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.382 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.382 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.641 { 00:18:15.641 "cntlid": 57, 00:18:15.641 "qid": 0, 00:18:15.641 "state": "enabled", 00:18:15.641 "thread": "nvmf_tgt_poll_group_000", 00:18:15.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:15.641 "listen_address": { 00:18:15.641 "trtype": "TCP", 00:18:15.641 "adrfam": "IPv4", 00:18:15.641 "traddr": "10.0.0.2", 00:18:15.641 
"trsvcid": "4420" 00:18:15.641 }, 00:18:15.641 "peer_address": { 00:18:15.641 "trtype": "TCP", 00:18:15.641 "adrfam": "IPv4", 00:18:15.641 "traddr": "10.0.0.1", 00:18:15.641 "trsvcid": "36008" 00:18:15.641 }, 00:18:15.641 "auth": { 00:18:15.641 "state": "completed", 00:18:15.641 "digest": "sha384", 00:18:15.641 "dhgroup": "ffdhe2048" 00:18:15.641 } 00:18:15.641 } 00:18:15.641 ]' 00:18:15.641 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.641 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.207 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:16.207 10:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.140 10:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.140 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.706 00:18:17.706 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.706 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.706 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.963 { 00:18:17.963 "cntlid": 59, 00:18:17.963 "qid": 0, 00:18:17.963 "state": "enabled", 00:18:17.963 "thread": "nvmf_tgt_poll_group_000", 00:18:17.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:17.963 "listen_address": { 00:18:17.963 "trtype": "TCP", 00:18:17.963 "adrfam": "IPv4", 00:18:17.963 "traddr": "10.0.0.2", 00:18:17.963 "trsvcid": "4420" 00:18:17.963 }, 00:18:17.963 "peer_address": { 00:18:17.963 "trtype": "TCP", 00:18:17.963 "adrfam": "IPv4", 00:18:17.963 "traddr": "10.0.0.1", 00:18:17.963 "trsvcid": "36036" 00:18:17.963 }, 00:18:17.963 "auth": { 00:18:17.963 "state": "completed", 00:18:17.963 "digest": "sha384", 00:18:17.963 "dhgroup": "ffdhe2048" 00:18:17.963 } 00:18:17.963 } 00:18:17.963 ]' 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.963 10:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.963 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.220 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:18.220 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.216 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.521 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:19.814 00:18:19.814 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.814 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.814 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.071 10:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.071 { 00:18:20.071 "cntlid": 61, 00:18:20.071 "qid": 0, 00:18:20.071 "state": "enabled", 00:18:20.071 "thread": "nvmf_tgt_poll_group_000", 00:18:20.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:20.071 "listen_address": { 00:18:20.071 "trtype": "TCP", 00:18:20.071 "adrfam": "IPv4", 00:18:20.071 "traddr": "10.0.0.2", 00:18:20.071 "trsvcid": "4420" 00:18:20.071 }, 00:18:20.071 "peer_address": { 00:18:20.071 "trtype": "TCP", 00:18:20.071 "adrfam": "IPv4", 00:18:20.071 "traddr": "10.0.0.1", 00:18:20.071 "trsvcid": "34526" 00:18:20.071 }, 00:18:20.071 "auth": { 00:18:20.071 "state": "completed", 00:18:20.071 "digest": "sha384", 00:18:20.071 "dhgroup": "ffdhe2048" 00:18:20.071 } 00:18:20.071 } 00:18:20.071 ]' 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.071 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.636 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:20.636 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.568 10:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:21.825 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:22.083 00:18:22.083 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.083 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.083 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.340 { 00:18:22.340 "cntlid": 63, 00:18:22.340 "qid": 0, 00:18:22.340 "state": "enabled", 00:18:22.340 "thread": "nvmf_tgt_poll_group_000", 00:18:22.340 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:22.340 "listen_address": { 00:18:22.340 "trtype": "TCP", 00:18:22.340 "adrfam": 
"IPv4", 00:18:22.340 "traddr": "10.0.0.2", 00:18:22.340 "trsvcid": "4420" 00:18:22.340 }, 00:18:22.340 "peer_address": { 00:18:22.340 "trtype": "TCP", 00:18:22.340 "adrfam": "IPv4", 00:18:22.340 "traddr": "10.0.0.1", 00:18:22.340 "trsvcid": "34566" 00:18:22.340 }, 00:18:22.340 "auth": { 00:18:22.340 "state": "completed", 00:18:22.340 "digest": "sha384", 00:18:22.340 "dhgroup": "ffdhe2048" 00:18:22.340 } 00:18:22.340 } 00:18:22.340 ]' 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:22.340 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.596 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:22.596 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.596 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.596 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.596 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.851 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:22.851 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:23.779 10:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:23.779 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.036 
10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.036 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.294 00:18:24.294 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.294 10:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.294 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.551 { 00:18:24.551 "cntlid": 65, 00:18:24.551 "qid": 0, 00:18:24.551 "state": "enabled", 00:18:24.551 "thread": "nvmf_tgt_poll_group_000", 00:18:24.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:24.551 "listen_address": { 00:18:24.551 "trtype": "TCP", 00:18:24.551 "adrfam": "IPv4", 00:18:24.551 "traddr": "10.0.0.2", 00:18:24.551 "trsvcid": "4420" 00:18:24.551 }, 00:18:24.551 "peer_address": { 00:18:24.551 "trtype": "TCP", 00:18:24.551 "adrfam": "IPv4", 00:18:24.551 "traddr": "10.0.0.1", 00:18:24.551 "trsvcid": "34594" 00:18:24.551 }, 00:18:24.551 "auth": { 00:18:24.551 "state": "completed", 00:18:24.551 "digest": "sha384", 00:18:24.551 "dhgroup": "ffdhe3072" 00:18:24.551 } 00:18:24.551 } 00:18:24.551 ]' 00:18:24.551 10:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.808 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.066 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:25.066 10:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.000 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.258 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:26.824 00:18:26.824 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.824 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.824 10:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.101 10:28:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.101 { 00:18:27.101 "cntlid": 67, 00:18:27.101 "qid": 0, 00:18:27.101 "state": "enabled", 00:18:27.101 "thread": "nvmf_tgt_poll_group_000", 00:18:27.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:27.101 "listen_address": { 00:18:27.101 "trtype": "TCP", 00:18:27.101 "adrfam": "IPv4", 00:18:27.101 "traddr": "10.0.0.2", 00:18:27.101 "trsvcid": "4420" 00:18:27.101 }, 00:18:27.101 "peer_address": { 00:18:27.101 "trtype": "TCP", 00:18:27.101 "adrfam": "IPv4", 00:18:27.101 "traddr": "10.0.0.1", 00:18:27.101 "trsvcid": "34612" 00:18:27.101 }, 00:18:27.101 "auth": { 00:18:27.101 "state": "completed", 00:18:27.101 "digest": "sha384", 00:18:27.101 "dhgroup": "ffdhe3072" 00:18:27.101 } 00:18:27.101 } 00:18:27.101 ]' 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.101 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.364 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:27.364 10:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.297 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:28.557 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.123 00:18:29.123 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.123 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.123 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.380 { 00:18:29.380 "cntlid": 69, 00:18:29.380 "qid": 0, 00:18:29.380 "state": "enabled", 00:18:29.380 "thread": "nvmf_tgt_poll_group_000", 00:18:29.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:29.380 
"listen_address": { 00:18:29.380 "trtype": "TCP", 00:18:29.380 "adrfam": "IPv4", 00:18:29.380 "traddr": "10.0.0.2", 00:18:29.380 "trsvcid": "4420" 00:18:29.380 }, 00:18:29.380 "peer_address": { 00:18:29.380 "trtype": "TCP", 00:18:29.380 "adrfam": "IPv4", 00:18:29.380 "traddr": "10.0.0.1", 00:18:29.380 "trsvcid": "54332" 00:18:29.380 }, 00:18:29.380 "auth": { 00:18:29.380 "state": "completed", 00:18:29.380 "digest": "sha384", 00:18:29.380 "dhgroup": "ffdhe3072" 00:18:29.380 } 00:18:29.380 } 00:18:29.380 ]' 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.380 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.637 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:29.637 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.566 10:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:30.822 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.445 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.445 { 00:18:31.445 "cntlid": 71, 00:18:31.445 "qid": 0, 00:18:31.445 "state": "enabled", 00:18:31.445 "thread": "nvmf_tgt_poll_group_000", 00:18:31.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:31.445 "listen_address": { 00:18:31.445 "trtype": "TCP", 00:18:31.445 "adrfam": "IPv4", 00:18:31.445 "traddr": "10.0.0.2", 00:18:31.445 "trsvcid": "4420" 00:18:31.445 }, 00:18:31.445 "peer_address": { 00:18:31.445 "trtype": "TCP", 00:18:31.445 "adrfam": "IPv4", 00:18:31.445 "traddr": "10.0.0.1", 00:18:31.445 "trsvcid": "54364" 00:18:31.445 }, 00:18:31.445 "auth": { 00:18:31.445 "state": "completed", 00:18:31.445 "digest": "sha384", 00:18:31.445 "dhgroup": "ffdhe3072" 00:18:31.445 } 00:18:31.445 } 00:18:31.445 ]' 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.445 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:31.445 10:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.703 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:31.703 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.703 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.703 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.703 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.960 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:31.960 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.910 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:32.910 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.167 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.730 00:18:33.730 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.730 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.730 10:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.730 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.730 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.730 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.730 10:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.731 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.731 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.731 { 00:18:33.731 "cntlid": 73, 00:18:33.731 "qid": 0, 00:18:33.731 "state": "enabled", 00:18:33.731 "thread": "nvmf_tgt_poll_group_000", 00:18:33.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:33.731 "listen_address": { 00:18:33.731 "trtype": "TCP", 00:18:33.731 "adrfam": "IPv4", 00:18:33.731 "traddr": "10.0.0.2", 00:18:33.731 "trsvcid": "4420" 00:18:33.731 }, 00:18:33.731 "peer_address": { 00:18:33.731 "trtype": "TCP", 00:18:33.731 "adrfam": "IPv4", 00:18:33.731 "traddr": "10.0.0.1", 00:18:33.731 "trsvcid": "54394" 00:18:33.731 }, 00:18:33.731 "auth": { 00:18:33.731 "state": "completed", 00:18:33.731 "digest": "sha384", 00:18:33.731 "dhgroup": "ffdhe4096" 00:18:33.731 } 00:18:33.731 } 00:18:33.731 ]' 00:18:33.731 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.988 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.988 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.988 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:33.988 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.988 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.988 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.988 10:29:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.246 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:34.246 10:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.179 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.437 10:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.695 00:18:35.695 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.695 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.695 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.953 { 00:18:35.953 "cntlid": 75, 00:18:35.953 "qid": 0, 00:18:35.953 "state": "enabled", 00:18:35.953 "thread": "nvmf_tgt_poll_group_000", 00:18:35.953 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:35.953 
"listen_address": { 00:18:35.953 "trtype": "TCP", 00:18:35.953 "adrfam": "IPv4", 00:18:35.953 "traddr": "10.0.0.2", 00:18:35.953 "trsvcid": "4420" 00:18:35.953 }, 00:18:35.953 "peer_address": { 00:18:35.953 "trtype": "TCP", 00:18:35.953 "adrfam": "IPv4", 00:18:35.953 "traddr": "10.0.0.1", 00:18:35.953 "trsvcid": "54420" 00:18:35.953 }, 00:18:35.953 "auth": { 00:18:35.953 "state": "completed", 00:18:35.953 "digest": "sha384", 00:18:35.953 "dhgroup": "ffdhe4096" 00:18:35.953 } 00:18:35.953 } 00:18:35.953 ]' 00:18:35.953 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.212 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.470 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:36.470 10:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.404 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:37.663 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:38.228 00:18:38.228 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:38.228 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.228 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.486 { 00:18:38.486 "cntlid": 77, 00:18:38.486 "qid": 0, 00:18:38.486 "state": "enabled", 00:18:38.486 "thread": "nvmf_tgt_poll_group_000", 00:18:38.486 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:38.486 "listen_address": { 00:18:38.486 "trtype": "TCP", 00:18:38.486 "adrfam": "IPv4", 00:18:38.486 "traddr": "10.0.0.2", 00:18:38.486 "trsvcid": "4420" 00:18:38.486 }, 00:18:38.486 "peer_address": { 00:18:38.486 "trtype": "TCP", 00:18:38.486 "adrfam": "IPv4", 00:18:38.486 "traddr": "10.0.0.1", 00:18:38.486 "trsvcid": "54446" 00:18:38.486 }, 00:18:38.486 "auth": { 00:18:38.486 "state": "completed", 00:18:38.486 "digest": "sha384", 00:18:38.486 "dhgroup": "ffdhe4096" 00:18:38.486 } 00:18:38.486 } 00:18:38.486 ]' 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.486 10:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.486 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.744 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:38.744 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.675 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:39.933 10:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:39.933 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:40.497 00:18:40.497 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.497 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.497 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.754 10:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.754 { 00:18:40.754 "cntlid": 79, 00:18:40.754 "qid": 0, 00:18:40.754 "state": "enabled", 00:18:40.754 "thread": "nvmf_tgt_poll_group_000", 00:18:40.754 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:40.754 "listen_address": { 00:18:40.754 "trtype": "TCP", 00:18:40.754 "adrfam": "IPv4", 00:18:40.754 "traddr": "10.0.0.2", 00:18:40.754 "trsvcid": "4420" 00:18:40.754 }, 00:18:40.754 "peer_address": { 00:18:40.754 "trtype": "TCP", 00:18:40.754 "adrfam": "IPv4", 00:18:40.754 "traddr": "10.0.0.1", 00:18:40.754 "trsvcid": "43788" 00:18:40.754 }, 00:18:40.754 "auth": { 00:18:40.754 "state": "completed", 00:18:40.754 "digest": "sha384", 00:18:40.754 "dhgroup": "ffdhe4096" 00:18:40.754 } 00:18:40.754 } 00:18:40.754 ]' 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:40.754 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.754 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.754 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.754 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.754 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.754 10:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.020 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:41.020 10:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:18:41.991 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.248 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:42.813 00:18:42.813 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.813 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.813 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.071 { 00:18:43.071 "cntlid": 81, 00:18:43.071 "qid": 0, 00:18:43.071 "state": "enabled", 00:18:43.071 "thread": "nvmf_tgt_poll_group_000", 00:18:43.071 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:43.071 "listen_address": { 
00:18:43.071 "trtype": "TCP", 00:18:43.071 "adrfam": "IPv4", 00:18:43.071 "traddr": "10.0.0.2", 00:18:43.071 "trsvcid": "4420" 00:18:43.071 }, 00:18:43.071 "peer_address": { 00:18:43.071 "trtype": "TCP", 00:18:43.071 "adrfam": "IPv4", 00:18:43.071 "traddr": "10.0.0.1", 00:18:43.071 "trsvcid": "43816" 00:18:43.071 }, 00:18:43.071 "auth": { 00:18:43.071 "state": "completed", 00:18:43.071 "digest": "sha384", 00:18:43.071 "dhgroup": "ffdhe6144" 00:18:43.071 } 00:18:43.071 } 00:18:43.071 ]' 00:18:43.071 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.328 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.328 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.328 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:43.328 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.328 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.328 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.329 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.586 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:43.586 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.520 10:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.778 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.779 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.779 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:44.779 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.345 00:18:45.345 10:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.345 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.345 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.603 { 00:18:45.603 "cntlid": 83, 00:18:45.603 "qid": 0, 00:18:45.603 "state": "enabled", 00:18:45.603 "thread": "nvmf_tgt_poll_group_000", 00:18:45.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:45.603 "listen_address": { 00:18:45.603 "trtype": "TCP", 00:18:45.603 "adrfam": "IPv4", 00:18:45.603 "traddr": "10.0.0.2", 00:18:45.603 "trsvcid": "4420" 00:18:45.603 }, 00:18:45.603 "peer_address": { 00:18:45.603 "trtype": "TCP", 00:18:45.603 "adrfam": "IPv4", 00:18:45.603 "traddr": "10.0.0.1", 00:18:45.603 "trsvcid": "43848" 00:18:45.603 }, 00:18:45.603 "auth": { 00:18:45.603 "state": "completed", 00:18:45.603 "digest": "sha384", 00:18:45.603 "dhgroup": "ffdhe6144" 00:18:45.603 } 00:18:45.603 } 00:18:45.603 ]' 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.603 10:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.603 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.603 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.603 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.861 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:45.861 10:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.802 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.802 10:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:46.802 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.060 10:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.627 00:18:47.627 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.627 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.627 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.885 { 00:18:47.885 "cntlid": 85, 00:18:47.885 "qid": 0, 00:18:47.885 "state": "enabled", 00:18:47.885 "thread": "nvmf_tgt_poll_group_000", 00:18:47.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:47.885 "listen_address": { 00:18:47.885 "trtype": "TCP", 00:18:47.885 "adrfam": "IPv4", 00:18:47.885 "traddr": "10.0.0.2", 00:18:47.885 "trsvcid": "4420" 00:18:47.885 }, 00:18:47.885 "peer_address": { 00:18:47.885 "trtype": "TCP", 00:18:47.885 "adrfam": "IPv4", 00:18:47.885 "traddr": "10.0.0.1", 00:18:47.885 "trsvcid": "43884" 00:18:47.885 }, 00:18:47.885 "auth": { 00:18:47.885 "state": "completed", 00:18:47.885 "digest": "sha384", 00:18:47.885 "dhgroup": "ffdhe6144" 00:18:47.885 } 00:18:47.885 } 00:18:47.885 ]' 00:18:47.885 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.144 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.402 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:48.402 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:49.333 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.334 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:49.591 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.155 00:18:50.155 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.155 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.155 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.412 { 00:18:50.412 "cntlid": 87, 00:18:50.412 "qid": 0, 00:18:50.412 "state": "enabled", 00:18:50.412 "thread": "nvmf_tgt_poll_group_000", 00:18:50.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:50.412 "listen_address": { 00:18:50.412 "trtype": 
"TCP", 00:18:50.412 "adrfam": "IPv4", 00:18:50.412 "traddr": "10.0.0.2", 00:18:50.412 "trsvcid": "4420" 00:18:50.412 }, 00:18:50.412 "peer_address": { 00:18:50.412 "trtype": "TCP", 00:18:50.412 "adrfam": "IPv4", 00:18:50.412 "traddr": "10.0.0.1", 00:18:50.412 "trsvcid": "49192" 00:18:50.412 }, 00:18:50.412 "auth": { 00:18:50.412 "state": "completed", 00:18:50.412 "digest": "sha384", 00:18:50.412 "dhgroup": "ffdhe6144" 00:18:50.412 } 00:18:50.412 } 00:18:50.412 ]' 00:18:50.412 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.676 10:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.933 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:50.933 10:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:51.864 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.122 10:29:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.122 10:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.054 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.054 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.310 { 00:18:53.310 "cntlid": 89, 00:18:53.310 "qid": 0, 00:18:53.310 "state": "enabled", 00:18:53.310 "thread": "nvmf_tgt_poll_group_000", 00:18:53.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:53.310 "listen_address": { 00:18:53.310 "trtype": "TCP", 00:18:53.310 "adrfam": "IPv4", 00:18:53.310 "traddr": "10.0.0.2", 00:18:53.310 "trsvcid": "4420" 00:18:53.310 }, 00:18:53.310 "peer_address": { 00:18:53.310 "trtype": "TCP", 00:18:53.310 "adrfam": "IPv4", 00:18:53.310 "traddr": "10.0.0.1", 00:18:53.310 "trsvcid": "49216" 00:18:53.310 }, 00:18:53.310 "auth": { 00:18:53.310 "state": "completed", 00:18:53.310 "digest": "sha384", 00:18:53.310 "dhgroup": "ffdhe8192" 00:18:53.310 } 00:18:53.310 } 00:18:53.310 ]' 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.310 10:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.310 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.566 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:53.566 10:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.499 10:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.757 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.692 00:18:55.692 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.692 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.692 10:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.950 { 00:18:55.950 "cntlid": 91, 00:18:55.950 "qid": 0, 00:18:55.950 "state": "enabled", 00:18:55.950 "thread": "nvmf_tgt_poll_group_000", 00:18:55.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:55.950 "listen_address": { 00:18:55.950 "trtype": "TCP", 00:18:55.950 "adrfam": "IPv4", 00:18:55.950 "traddr": "10.0.0.2", 00:18:55.950 "trsvcid": "4420" 00:18:55.950 }, 00:18:55.950 "peer_address": { 00:18:55.950 "trtype": "TCP", 00:18:55.950 "adrfam": "IPv4", 00:18:55.950 "traddr": "10.0.0.1", 00:18:55.950 "trsvcid": "49252" 00:18:55.950 }, 00:18:55.950 "auth": { 00:18:55.950 "state": "completed", 00:18:55.950 "digest": "sha384", 00:18:55.950 "dhgroup": "ffdhe8192" 00:18:55.950 } 00:18:55.950 } 00:18:55.950 ]' 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.950 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.208 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:56.208 10:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
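The cycle that just completed above (set options, add host, attach, verify, detach, remove host) repeats once per key. As a reading aid, here is a hedged dry-run sketch of that sequence; it only prints the `rpc.py` invocations rather than executing them, the `auth_cycle` helper name and the `TGT_RPC`/`HOST_RPC` split are our own (in the log, `rpc_cmd` talks to the target daemon and `hostrpc` to the host daemon at `/var/tmp/host.sock`), and the NQNs/addresses are copied from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of one per-key DH-HMAC-CHAP cycle from target/auth.sh.
# Nothing is executed against a live target; commands are only printed.
TGT_RPC="scripts/rpc.py"                       # target-side RPCs (rpc_cmd in the log)
HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"  # host-side RPCs (hostrpc in the log)
SUBNQN="nqn.2024-03.io.spdk:cnode0"
HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a"

auth_cycle() {
  local digest=$1 dhgroup=$2 keyid=$3
  # 1. Pin the host initiator to a single digest/dhgroup combination.
  echo "$HOST_RPC bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
  # 2. Register the host on the subsystem with the key pair under test.
  echo "$TGT_RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN" \
       "--dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid"
  # 3. Attach a controller, which forces the bidirectional auth handshake.
  echo "$HOST_RPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420" \
       "-q $HOSTNQN -n $SUBNQN -b nvme0 --dhchap-key key$keyid --dhchap-ctrlr-key ckey$keyid"
  # 4. Tear down so the next key can be exercised.
  echo "$HOST_RPC bdev_nvme_detach_controller nvme0"
  echo "$TGT_RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN"
}

auth_cycle sha384 ffdhe8192 1   # the iteration logged above
```

Between steps 3 and 4 the real test also dumps the subsystem's qpairs and asserts on the `auth` object, which is where the JSON blocks in this log come from.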
00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.142 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.399 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.400 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.400 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.400 10:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.331 00:18:58.331 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.331 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.331 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.587 { 00:18:58.587 "cntlid": 93, 00:18:58.587 "qid": 0, 00:18:58.587 "state": "enabled", 00:18:58.587 "thread": "nvmf_tgt_poll_group_000", 00:18:58.587 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:58.587 "listen_address": { 00:18:58.587 "trtype": "TCP", 00:18:58.587 "adrfam": "IPv4", 00:18:58.587 "traddr": "10.0.0.2", 00:18:58.587 "trsvcid": "4420" 00:18:58.587 }, 00:18:58.587 "peer_address": { 00:18:58.587 "trtype": "TCP", 00:18:58.587 "adrfam": "IPv4", 00:18:58.587 "traddr": "10.0.0.1", 00:18:58.587 "trsvcid": "49260" 00:18:58.587 }, 00:18:58.587 "auth": { 00:18:58.587 "state": "completed", 00:18:58.587 "digest": "sha384", 00:18:58.587 "dhgroup": "ffdhe8192" 00:18:58.587 } 00:18:58.587 } 00:18:58.587 ]' 00:18:58.587 10:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.587 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.587 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.845 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.845 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.845 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.845 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.845 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.102 10:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:18:59.102 10:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.034 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:00.291 10:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:01.226 00:19:01.226 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:01.226 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.226 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.484 { 00:19:01.484 "cntlid": 95, 00:19:01.484 "qid": 0, 00:19:01.484 "state": "enabled", 00:19:01.484 "thread": "nvmf_tgt_poll_group_000", 00:19:01.484 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:01.484 "listen_address": { 00:19:01.484 "trtype": "TCP", 00:19:01.484 "adrfam": "IPv4", 00:19:01.484 "traddr": "10.0.0.2", 00:19:01.484 "trsvcid": "4420" 00:19:01.484 }, 00:19:01.484 "peer_address": { 00:19:01.484 "trtype": "TCP", 00:19:01.484 "adrfam": "IPv4", 00:19:01.484 "traddr": "10.0.0.1", 00:19:01.484 "trsvcid": "48646" 00:19:01.484 }, 00:19:01.484 "auth": { 00:19:01.484 "state": "completed", 00:19:01.484 "digest": "sha384", 00:19:01.484 "dhgroup": "ffdhe8192" 00:19:01.484 } 00:19:01.484 } 00:19:01.484 ]' 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.484 10:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.484 10:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.743 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:01.743 10:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.678 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.935 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.500 00:19:03.500 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.500 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.500 10:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.758 10:29:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.758 { 00:19:03.758 "cntlid": 97, 00:19:03.758 "qid": 0, 00:19:03.758 "state": "enabled", 00:19:03.758 "thread": "nvmf_tgt_poll_group_000", 00:19:03.758 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:03.758 "listen_address": { 00:19:03.758 "trtype": "TCP", 00:19:03.758 "adrfam": "IPv4", 00:19:03.758 "traddr": "10.0.0.2", 00:19:03.758 "trsvcid": "4420" 00:19:03.758 }, 00:19:03.758 "peer_address": { 00:19:03.758 "trtype": "TCP", 00:19:03.758 "adrfam": "IPv4", 00:19:03.758 "traddr": "10.0.0.1", 00:19:03.758 "trsvcid": "48660" 00:19:03.758 }, 00:19:03.758 "auth": { 00:19:03.758 "state": "completed", 00:19:03.758 "digest": "sha512", 00:19:03.758 "dhgroup": "null" 00:19:03.758 } 00:19:03.758 } 00:19:03.758 ]' 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.758 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.759 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:03.759 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.759 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.759 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.759 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.016 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:04.016 10:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:04.951 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:05.209 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.210 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.776 00:19:05.776 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.776 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.776 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.033 { 00:19:06.033 "cntlid": 99, 
00:19:06.033 "qid": 0, 00:19:06.033 "state": "enabled", 00:19:06.033 "thread": "nvmf_tgt_poll_group_000", 00:19:06.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:06.033 "listen_address": { 00:19:06.033 "trtype": "TCP", 00:19:06.033 "adrfam": "IPv4", 00:19:06.033 "traddr": "10.0.0.2", 00:19:06.033 "trsvcid": "4420" 00:19:06.033 }, 00:19:06.033 "peer_address": { 00:19:06.033 "trtype": "TCP", 00:19:06.033 "adrfam": "IPv4", 00:19:06.033 "traddr": "10.0.0.1", 00:19:06.033 "trsvcid": "48676" 00:19:06.033 }, 00:19:06.033 "auth": { 00:19:06.033 "state": "completed", 00:19:06.033 "digest": "sha512", 00:19:06.033 "dhgroup": "null" 00:19:06.033 } 00:19:06.033 } 00:19:06.033 ]' 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.033 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.290 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret 
DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:06.290 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.221 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
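Each qpairs dump in this log is verified with three jq filters (`.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal dependency-free stand-in for those checks, run against a trimmed copy of the JSON shown above (the `field` helper is our own and assumes the `"name": "value"` spacing the log uses; the real test uses jq):

```shell
#!/usr/bin/env bash
# Re-check of the three auth fields asserted after every attach in the log,
# using bash parameter expansion instead of jq. Input is a trimmed copy of
# one qpairs array from the log above.
qpairs='[ { "auth": { "state": "completed", "digest": "sha384", "dhgroup": "ffdhe8192" } } ]'

field() {  # field <name>: extract the value of "<name>": "<value>" from $qpairs
  local v=${qpairs#*\"$1\": \"}   # drop everything up to and including "<name>": "
  printf '%s\n' "${v%%\"*}"       # keep everything before the closing quote
}

[[ $(field digest)  == sha384    ]] && echo "digest ok"
[[ $(field dhgroup) == ffdhe8192 ]] && echo "dhgroup ok"
[[ $(field state)   == completed ]] && echo "state ok"
```

The `state: completed` check is the one that actually proves the DH-HMAC-CHAP negotiation succeeded; `digest` and `dhgroup` confirm the controller negotiated the combination the cycle pinned with `bdev_nvme_set_options`.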
00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.479 10:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.042 00:19:08.042 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.042 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.042 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.300 { 00:19:08.300 "cntlid": 101, 00:19:08.300 "qid": 0, 00:19:08.300 "state": "enabled", 00:19:08.300 "thread": "nvmf_tgt_poll_group_000", 00:19:08.300 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:08.300 "listen_address": { 00:19:08.300 "trtype": "TCP", 00:19:08.300 "adrfam": "IPv4", 00:19:08.300 "traddr": "10.0.0.2", 00:19:08.300 "trsvcid": "4420" 00:19:08.300 }, 00:19:08.300 "peer_address": { 00:19:08.300 "trtype": "TCP", 00:19:08.300 "adrfam": "IPv4", 00:19:08.300 "traddr": "10.0.0.1", 00:19:08.300 "trsvcid": "48702" 00:19:08.300 }, 00:19:08.300 "auth": { 00:19:08.300 "state": "completed", 00:19:08.300 "digest": "sha512", 00:19:08.300 "dhgroup": "null" 00:19:08.300 } 00:19:08.300 } 
00:19:08.300 ]' 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.300 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.557 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:08.557 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.488 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.488 10:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:09.745 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:09.746 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.003 00:19:10.261 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.261 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.261 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.520 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.520 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:10.520 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.520 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.520 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.520 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.520 { 00:19:10.520 "cntlid": 103, 00:19:10.520 "qid": 0, 00:19:10.520 "state": "enabled", 00:19:10.520 "thread": "nvmf_tgt_poll_group_000", 00:19:10.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:10.520 "listen_address": { 00:19:10.520 "trtype": "TCP", 00:19:10.520 "adrfam": "IPv4", 00:19:10.520 "traddr": "10.0.0.2", 00:19:10.520 "trsvcid": "4420" 00:19:10.520 }, 00:19:10.520 "peer_address": { 00:19:10.520 "trtype": "TCP", 00:19:10.520 "adrfam": "IPv4", 00:19:10.520 "traddr": "10.0.0.1", 00:19:10.520 "trsvcid": "34418" 00:19:10.520 }, 00:19:10.521 "auth": { 00:19:10.521 "state": "completed", 00:19:10.521 "digest": "sha512", 00:19:10.521 "dhgroup": "null" 00:19:10.521 } 00:19:10.521 } 00:19:10.521 ]' 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.521 10:29:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.521 10:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.780 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:10.780 10:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:11.715 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.715 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.715 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.716 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.716 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.716 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.716 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.716 10:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.716 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.974 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.232 00:19:12.490 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.490 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.490 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.748 { 00:19:12.748 "cntlid": 105, 00:19:12.748 "qid": 0, 00:19:12.748 "state": "enabled", 00:19:12.748 "thread": "nvmf_tgt_poll_group_000", 00:19:12.748 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:12.748 "listen_address": { 00:19:12.748 "trtype": "TCP", 00:19:12.748 "adrfam": "IPv4", 00:19:12.748 "traddr": "10.0.0.2", 00:19:12.748 "trsvcid": "4420" 00:19:12.748 }, 00:19:12.748 "peer_address": { 00:19:12.748 "trtype": "TCP", 00:19:12.748 "adrfam": "IPv4", 00:19:12.748 "traddr": "10.0.0.1", 00:19:12.748 "trsvcid": "34436" 00:19:12.748 }, 00:19:12.748 "auth": { 00:19:12.748 "state": "completed", 00:19:12.748 "digest": "sha512", 00:19:12.748 "dhgroup": "ffdhe2048" 00:19:12.748 } 00:19:12.748 } 00:19:12.748 ]' 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.748 10:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.748 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:12.748 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.748 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.748 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.748 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.006 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret 
DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:13.006 10:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:13.940 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:14.199 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.199 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.517 00:19:14.517 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.517 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.517 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.808 { 00:19:14.808 "cntlid": 107, 00:19:14.808 "qid": 0, 00:19:14.808 "state": "enabled", 00:19:14.808 "thread": "nvmf_tgt_poll_group_000", 00:19:14.808 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:14.808 "listen_address": { 00:19:14.808 "trtype": "TCP", 00:19:14.808 "adrfam": "IPv4", 00:19:14.808 "traddr": "10.0.0.2", 00:19:14.808 "trsvcid": "4420" 00:19:14.808 }, 00:19:14.808 "peer_address": { 00:19:14.808 "trtype": "TCP", 00:19:14.808 "adrfam": "IPv4", 00:19:14.808 "traddr": "10.0.0.1", 00:19:14.808 "trsvcid": "34468" 00:19:14.808 }, 00:19:14.808 "auth": { 00:19:14.808 "state": 
"completed", 00:19:14.808 "digest": "sha512", 00:19:14.808 "dhgroup": "ffdhe2048" 00:19:14.808 } 00:19:14.808 } 00:19:14.808 ]' 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:14.808 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.090 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:15.090 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.090 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.090 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.090 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.347 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:15.348 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:16.276 10:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.276 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.276 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.533 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.789 00:19:16.789 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:16.789 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:16.789 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.046 
10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.046 { 00:19:17.046 "cntlid": 109, 00:19:17.046 "qid": 0, 00:19:17.046 "state": "enabled", 00:19:17.046 "thread": "nvmf_tgt_poll_group_000", 00:19:17.046 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:17.046 "listen_address": { 00:19:17.046 "trtype": "TCP", 00:19:17.046 "adrfam": "IPv4", 00:19:17.046 "traddr": "10.0.0.2", 00:19:17.046 "trsvcid": "4420" 00:19:17.046 }, 00:19:17.046 "peer_address": { 00:19:17.046 "trtype": "TCP", 00:19:17.046 "adrfam": "IPv4", 00:19:17.046 "traddr": "10.0.0.1", 00:19:17.046 "trsvcid": "34492" 00:19:17.046 }, 00:19:17.046 "auth": { 00:19:17.046 "state": "completed", 00:19:17.046 "digest": "sha512", 00:19:17.046 "dhgroup": "ffdhe2048" 00:19:17.046 } 00:19:17.046 } 00:19:17.046 ]' 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:17.046 10:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.046 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.303 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:17.303 10:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.234 
10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.234 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.491 10:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:18.491 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:19.055 00:19:19.055 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.055 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.055 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.312 { 00:19:19.312 "cntlid": 111, 
00:19:19.312 "qid": 0, 00:19:19.312 "state": "enabled", 00:19:19.312 "thread": "nvmf_tgt_poll_group_000", 00:19:19.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:19.312 "listen_address": { 00:19:19.312 "trtype": "TCP", 00:19:19.312 "adrfam": "IPv4", 00:19:19.312 "traddr": "10.0.0.2", 00:19:19.312 "trsvcid": "4420" 00:19:19.312 }, 00:19:19.312 "peer_address": { 00:19:19.312 "trtype": "TCP", 00:19:19.312 "adrfam": "IPv4", 00:19:19.312 "traddr": "10.0.0.1", 00:19:19.312 "trsvcid": "40852" 00:19:19.312 }, 00:19:19.312 "auth": { 00:19:19.312 "state": "completed", 00:19:19.312 "digest": "sha512", 00:19:19.312 "dhgroup": "ffdhe2048" 00:19:19.312 } 00:19:19.312 } 00:19:19.312 ]' 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.312 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.569 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:19.569 10:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:20.503 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.503 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.503 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.504 10:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:20.762 10:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:20.762 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.021 00:19:21.021 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.021 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.021 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.280 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.537 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.537 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.537 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.537 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.537 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.537 { 00:19:21.537 "cntlid": 113, 00:19:21.537 "qid": 0, 00:19:21.537 "state": "enabled", 00:19:21.537 "thread": "nvmf_tgt_poll_group_000", 00:19:21.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:21.537 "listen_address": { 00:19:21.537 "trtype": "TCP", 00:19:21.537 "adrfam": "IPv4", 00:19:21.537 "traddr": "10.0.0.2", 00:19:21.537 "trsvcid": "4420" 00:19:21.537 }, 00:19:21.537 "peer_address": { 00:19:21.537 "trtype": "TCP", 00:19:21.537 "adrfam": "IPv4", 00:19:21.537 "traddr": "10.0.0.1", 00:19:21.537 "trsvcid": "40878" 00:19:21.537 }, 00:19:21.537 "auth": { 00:19:21.537 "state": 
"completed", 00:19:21.537 "digest": "sha512", 00:19:21.537 "dhgroup": "ffdhe3072" 00:19:21.537 } 00:19:21.537 } 00:19:21.537 ]' 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.538 10:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.796 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:21.796 10:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret 
DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.728 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:22.985 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:22.986 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:23.551 00:19:23.551 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:23.551 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:23.551 10:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.810 { 00:19:23.810 "cntlid": 115, 00:19:23.810 "qid": 0, 00:19:23.810 "state": "enabled", 00:19:23.810 "thread": "nvmf_tgt_poll_group_000", 00:19:23.810 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:23.810 "listen_address": { 00:19:23.810 "trtype": "TCP", 00:19:23.810 "adrfam": "IPv4", 00:19:23.810 "traddr": "10.0.0.2", 00:19:23.810 "trsvcid": "4420" 00:19:23.810 }, 00:19:23.810 "peer_address": { 00:19:23.810 "trtype": "TCP", 00:19:23.810 "adrfam": "IPv4", 00:19:23.810 "traddr": "10.0.0.1", 00:19:23.810 "trsvcid": "40888" 00:19:23.810 }, 00:19:23.810 "auth": { 00:19:23.810 "state": "completed", 00:19:23.810 "digest": "sha512", 00:19:23.810 "dhgroup": "ffdhe3072" 00:19:23.810 } 00:19:23.810 } 00:19:23.810 ]' 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.810 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.810 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.376 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:24.376 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.311 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.569 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:25.829 00:19:25.829 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:25.829 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:25.829 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.086 10:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.086 { 00:19:26.086 "cntlid": 117, 00:19:26.086 "qid": 0, 00:19:26.086 "state": "enabled", 00:19:26.086 "thread": "nvmf_tgt_poll_group_000", 00:19:26.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:26.086 "listen_address": { 00:19:26.086 "trtype": "TCP", 00:19:26.086 "adrfam": "IPv4", 00:19:26.086 "traddr": "10.0.0.2", 00:19:26.086 "trsvcid": "4420" 00:19:26.086 }, 00:19:26.086 "peer_address": { 00:19:26.086 "trtype": "TCP", 00:19:26.086 "adrfam": "IPv4", 00:19:26.086 "traddr": "10.0.0.1", 00:19:26.086 "trsvcid": "40924" 00:19:26.086 }, 00:19:26.086 "auth": { 00:19:26.086 "state": "completed", 00:19:26.086 "digest": "sha512", 00:19:26.086 "dhgroup": "ffdhe3072" 00:19:26.086 } 00:19:26.086 } 00:19:26.086 ]' 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.086 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.344 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:26.344 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.344 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.344 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.344 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.601 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:26.601 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.532 10:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:27.788 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:28.364 00:19:28.364 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:28.364 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.364 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.621 { 00:19:28.621 "cntlid": 119, 00:19:28.621 "qid": 0, 00:19:28.621 "state": "enabled", 00:19:28.621 "thread": "nvmf_tgt_poll_group_000", 00:19:28.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:28.621 "listen_address": { 00:19:28.621 "trtype": "TCP", 00:19:28.621 "adrfam": "IPv4", 00:19:28.621 "traddr": "10.0.0.2", 00:19:28.621 "trsvcid": "4420" 00:19:28.621 }, 00:19:28.621 "peer_address": { 00:19:28.621 "trtype": "TCP", 00:19:28.621 "adrfam": "IPv4", 00:19:28.621 "traddr": "10.0.0.1", 
00:19:28.621 "trsvcid": "40948" 00:19:28.621 }, 00:19:28.621 "auth": { 00:19:28.621 "state": "completed", 00:19:28.621 "digest": "sha512", 00:19:28.621 "dhgroup": "ffdhe3072" 00:19:28.621 } 00:19:28.621 } 00:19:28.621 ]' 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.621 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:28.622 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.622 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.622 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.622 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.622 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.622 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.879 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:28.879 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:29.809 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:30.067 10:30:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.067 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:30.633 00:19:30.633 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.633 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.633 10:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.891 { 00:19:30.891 "cntlid": 121, 00:19:30.891 "qid": 0, 00:19:30.891 "state": "enabled", 00:19:30.891 "thread": "nvmf_tgt_poll_group_000", 00:19:30.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:30.891 "listen_address": { 00:19:30.891 "trtype": "TCP", 00:19:30.891 "adrfam": "IPv4", 00:19:30.891 "traddr": "10.0.0.2", 00:19:30.891 "trsvcid": "4420" 00:19:30.891 }, 00:19:30.891 "peer_address": { 00:19:30.891 "trtype": "TCP", 00:19:30.891 "adrfam": "IPv4", 00:19:30.891 "traddr": "10.0.0.1", 00:19:30.891 "trsvcid": "56638" 00:19:30.891 }, 00:19:30.891 "auth": { 00:19:30.891 "state": "completed", 00:19:30.891 "digest": "sha512", 00:19:30.891 "dhgroup": "ffdhe4096" 00:19:30.891 } 00:19:30.891 } 00:19:30.891 ]' 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.891 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.891 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.149 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:31.149 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.084 10:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.084 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.342 10:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.342 10:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:32.906 00:19:32.907 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.907 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.907 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.164 { 00:19:33.164 "cntlid": 123, 00:19:33.164 "qid": 0, 00:19:33.164 "state": "enabled", 00:19:33.164 "thread": "nvmf_tgt_poll_group_000", 00:19:33.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:33.164 "listen_address": { 00:19:33.164 "trtype": "TCP", 00:19:33.164 "adrfam": "IPv4", 00:19:33.164 "traddr": "10.0.0.2", 00:19:33.164 "trsvcid": "4420" 00:19:33.164 }, 00:19:33.164 "peer_address": { 00:19:33.164 "trtype": "TCP", 00:19:33.164 "adrfam": "IPv4", 00:19:33.164 "traddr": "10.0.0.1", 00:19:33.164 "trsvcid": "56666" 00:19:33.164 }, 00:19:33.164 "auth": { 00:19:33.164 "state": "completed", 00:19:33.164 "digest": "sha512", 00:19:33.164 "dhgroup": "ffdhe4096" 00:19:33.164 } 00:19:33.164 } 00:19:33.164 ]' 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.164 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.422 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:33.422 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.356 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.356 10:30:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.615 10:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.615 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.615 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.615 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:34.615 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.180 00:19:35.180 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.180 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.180 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.437 { 00:19:35.437 "cntlid": 125, 00:19:35.437 "qid": 0, 00:19:35.437 "state": "enabled", 00:19:35.437 "thread": "nvmf_tgt_poll_group_000", 00:19:35.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:35.437 "listen_address": { 00:19:35.437 "trtype": "TCP", 00:19:35.437 "adrfam": "IPv4", 00:19:35.437 "traddr": "10.0.0.2", 00:19:35.437 
"trsvcid": "4420" 00:19:35.437 }, 00:19:35.437 "peer_address": { 00:19:35.437 "trtype": "TCP", 00:19:35.437 "adrfam": "IPv4", 00:19:35.437 "traddr": "10.0.0.1", 00:19:35.437 "trsvcid": "56704" 00:19:35.437 }, 00:19:35.437 "auth": { 00:19:35.437 "state": "completed", 00:19:35.437 "digest": "sha512", 00:19:35.437 "dhgroup": "ffdhe4096" 00:19:35.437 } 00:19:35.437 } 00:19:35.437 ]' 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.437 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.694 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:35.694 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:36.623 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.623 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:36.623 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.623 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.623 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.623 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.623 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.623 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.879 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.442 00:19:37.442 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.442 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.442 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.700 { 00:19:37.700 "cntlid": 127, 00:19:37.700 "qid": 0, 00:19:37.700 "state": "enabled", 00:19:37.700 "thread": "nvmf_tgt_poll_group_000", 00:19:37.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:37.700 "listen_address": { 00:19:37.700 "trtype": "TCP", 00:19:37.700 "adrfam": "IPv4", 00:19:37.700 "traddr": "10.0.0.2", 00:19:37.700 "trsvcid": "4420" 00:19:37.700 }, 00:19:37.700 "peer_address": { 00:19:37.700 "trtype": "TCP", 00:19:37.700 "adrfam": "IPv4", 00:19:37.700 "traddr": "10.0.0.1", 00:19:37.700 "trsvcid": "56724" 00:19:37.700 }, 00:19:37.700 "auth": { 00:19:37.700 "state": "completed", 00:19:37.700 "digest": "sha512", 00:19:37.700 "dhgroup": "ffdhe4096" 00:19:37.700 } 00:19:37.700 } 00:19:37.700 ]' 00:19:37.700 10:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.700 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.700 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.700 10:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:37.700 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.700 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.700 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.700 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.957 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:37.957 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.887 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:38.887 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.145 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.711 00:19:39.711 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.711 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.711 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.969 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.969 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:39.969 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.969 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.969 10:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.969 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.969 { 00:19:39.969 "cntlid": 129, 00:19:39.969 "qid": 0, 00:19:39.969 "state": "enabled", 00:19:39.969 "thread": "nvmf_tgt_poll_group_000", 00:19:39.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:39.969 "listen_address": { 00:19:39.969 "trtype": "TCP", 00:19:39.969 "adrfam": "IPv4", 00:19:39.969 "traddr": "10.0.0.2", 00:19:39.969 "trsvcid": "4420" 00:19:39.969 }, 00:19:39.969 "peer_address": { 00:19:39.969 "trtype": "TCP", 00:19:39.969 "adrfam": "IPv4", 00:19:39.969 "traddr": "10.0.0.1", 00:19:39.969 "trsvcid": "45868" 00:19:39.969 }, 00:19:39.969 "auth": { 00:19:39.969 "state": "completed", 00:19:39.969 "digest": "sha512", 00:19:39.969 "dhgroup": "ffdhe6144" 00:19:39.969 } 00:19:39.969 } 00:19:39.969 ]' 00:19:39.969 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.227 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.485 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:40.485 10:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.419 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.419 10:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.678 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.245 00:19:42.245 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.245 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.245 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.503 { 00:19:42.503 "cntlid": 131, 00:19:42.503 "qid": 0, 00:19:42.503 "state": "enabled", 00:19:42.503 "thread": "nvmf_tgt_poll_group_000", 00:19:42.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:42.503 "listen_address": { 00:19:42.503 "trtype": "TCP", 00:19:42.503 "adrfam": "IPv4", 00:19:42.503 "traddr": "10.0.0.2", 00:19:42.503 
"trsvcid": "4420" 00:19:42.503 }, 00:19:42.503 "peer_address": { 00:19:42.503 "trtype": "TCP", 00:19:42.503 "adrfam": "IPv4", 00:19:42.503 "traddr": "10.0.0.1", 00:19:42.503 "trsvcid": "45888" 00:19:42.503 }, 00:19:42.503 "auth": { 00:19:42.503 "state": "completed", 00:19:42.503 "digest": "sha512", 00:19:42.503 "dhgroup": "ffdhe6144" 00:19:42.503 } 00:19:42.503 } 00:19:42.503 ]' 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.503 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.761 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.761 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.761 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.049 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:43.049 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:43.664 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.923 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:43.923 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.181 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.182 10:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.748 00:19:44.748 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.748 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:44.748 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.005 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.005 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.005 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.005 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.006 { 00:19:45.006 "cntlid": 133, 00:19:45.006 "qid": 0, 00:19:45.006 "state": "enabled", 00:19:45.006 "thread": "nvmf_tgt_poll_group_000", 00:19:45.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:45.006 "listen_address": { 00:19:45.006 "trtype": "TCP", 00:19:45.006 "adrfam": "IPv4", 00:19:45.006 "traddr": "10.0.0.2", 00:19:45.006 "trsvcid": "4420" 00:19:45.006 }, 00:19:45.006 "peer_address": { 00:19:45.006 "trtype": "TCP", 00:19:45.006 "adrfam": "IPv4", 00:19:45.006 "traddr": "10.0.0.1", 00:19:45.006 "trsvcid": "45922" 00:19:45.006 }, 00:19:45.006 "auth": { 00:19:45.006 "state": "completed", 00:19:45.006 "digest": "sha512", 00:19:45.006 "dhgroup": "ffdhe6144" 00:19:45.006 } 00:19:45.006 } 00:19:45.006 ]' 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.006 10:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.006 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.571 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:45.571 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:46.506 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:47.071 00:19:47.071 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.071 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.071 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.329 { 00:19:47.329 "cntlid": 135, 00:19:47.329 "qid": 0, 00:19:47.329 "state": "enabled", 00:19:47.329 "thread": "nvmf_tgt_poll_group_000", 00:19:47.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:47.329 "listen_address": { 00:19:47.329 "trtype": "TCP", 00:19:47.329 "adrfam": "IPv4", 00:19:47.329 "traddr": "10.0.0.2", 00:19:47.329 "trsvcid": "4420" 00:19:47.329 }, 00:19:47.329 "peer_address": { 00:19:47.329 "trtype": "TCP", 00:19:47.329 "adrfam": "IPv4", 00:19:47.329 "traddr": "10.0.0.1", 00:19:47.329 "trsvcid": "45946" 00:19:47.329 }, 00:19:47.329 "auth": { 00:19:47.329 "state": "completed", 00:19:47.329 "digest": "sha512", 00:19:47.329 "dhgroup": "ffdhe6144" 00:19:47.329 } 00:19:47.329 } 00:19:47.329 ]' 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.329 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.586 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:47.586 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.586 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.586 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.586 10:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.843 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:47.843 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.776 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:48.776 10:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.033 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.965 00:19:49.965 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.965 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.965 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.222 { 00:19:50.222 "cntlid": 137, 00:19:50.222 "qid": 0, 00:19:50.222 "state": "enabled", 00:19:50.222 "thread": "nvmf_tgt_poll_group_000", 00:19:50.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:50.222 "listen_address": { 00:19:50.222 "trtype": "TCP", 00:19:50.222 "adrfam": "IPv4", 00:19:50.222 "traddr": "10.0.0.2", 00:19:50.222 
"trsvcid": "4420" 00:19:50.222 }, 00:19:50.222 "peer_address": { 00:19:50.222 "trtype": "TCP", 00:19:50.222 "adrfam": "IPv4", 00:19:50.222 "traddr": "10.0.0.1", 00:19:50.222 "trsvcid": "59706" 00:19:50.222 }, 00:19:50.222 "auth": { 00:19:50.222 "state": "completed", 00:19:50.222 "digest": "sha512", 00:19:50.222 "dhgroup": "ffdhe8192" 00:19:50.222 } 00:19:50.222 } 00:19:50.222 ]' 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.222 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.478 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:50.478 10:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.410 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.668 10:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.668 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.669 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.669 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.603 00:19:52.603 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:52.603 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:52.603 10:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:52.862 { 00:19:52.862 "cntlid": 139, 00:19:52.862 "qid": 0, 00:19:52.862 "state": "enabled", 00:19:52.862 "thread": "nvmf_tgt_poll_group_000", 00:19:52.862 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:52.862 "listen_address": { 00:19:52.862 "trtype": "TCP", 00:19:52.862 "adrfam": "IPv4", 00:19:52.862 "traddr": "10.0.0.2", 00:19:52.862 "trsvcid": "4420" 00:19:52.862 }, 00:19:52.862 "peer_address": { 00:19:52.862 "trtype": "TCP", 00:19:52.862 "adrfam": "IPv4", 00:19:52.862 "traddr": "10.0.0.1", 00:19:52.862 "trsvcid": "59732" 00:19:52.862 }, 00:19:52.862 "auth": { 00:19:52.862 "state": "completed", 00:19:52.862 "digest": "sha512", 00:19:52.862 "dhgroup": "ffdhe8192" 00:19:52.862 } 00:19:52.862 } 00:19:52.862 ]' 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:52.862 10:30:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.862 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.119 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.119 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.119 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.377 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:53.377 10:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: --dhchap-ctrl-secret DHHC-1:02:YWMwNzVmOWMyMjQ1NzgxNDI2MzNiMzBmZjllMDFjODljYTEwNzUxNTA5ODUzYzQzxhZK7g==: 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.311 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.311 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.568 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.568 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.568 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.568 10:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.501 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.501 10:30:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.501 { 00:19:55.501 "cntlid": 141, 00:19:55.501 "qid": 0, 00:19:55.501 "state": "enabled", 00:19:55.501 "thread": "nvmf_tgt_poll_group_000", 00:19:55.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:55.501 "listen_address": { 00:19:55.501 "trtype": "TCP", 00:19:55.501 "adrfam": "IPv4", 00:19:55.501 "traddr": "10.0.0.2", 00:19:55.501 "trsvcid": "4420" 00:19:55.501 }, 00:19:55.501 "peer_address": { 00:19:55.501 "trtype": "TCP", 00:19:55.501 "adrfam": "IPv4", 00:19:55.501 "traddr": "10.0.0.1", 00:19:55.501 "trsvcid": "59748" 00:19:55.501 }, 00:19:55.501 "auth": { 00:19:55.501 "state": "completed", 00:19:55.501 "digest": "sha512", 00:19:55.501 "dhgroup": "ffdhe8192" 00:19:55.501 } 00:19:55.501 } 00:19:55.501 ]' 00:19:55.501 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.759 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.759 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.759 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:55.759 10:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.759 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.759 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.759 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.016 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:56.016 10:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:01:OThjZTA1MzJjZGJiZTcxZThiZmFhZTBhNGIyNWVhNDL68keN: 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:56.948 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:57.206 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.141 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.141 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.399 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.399 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.399 { 00:19:58.399 "cntlid": 143, 00:19:58.399 "qid": 0, 00:19:58.399 "state": "enabled", 00:19:58.399 "thread": "nvmf_tgt_poll_group_000", 00:19:58.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:58.400 "listen_address": { 00:19:58.400 "trtype": "TCP", 00:19:58.400 "adrfam": 
"IPv4", 00:19:58.400 "traddr": "10.0.0.2", 00:19:58.400 "trsvcid": "4420" 00:19:58.400 }, 00:19:58.400 "peer_address": { 00:19:58.400 "trtype": "TCP", 00:19:58.400 "adrfam": "IPv4", 00:19:58.400 "traddr": "10.0.0.1", 00:19:58.400 "trsvcid": "59772" 00:19:58.400 }, 00:19:58.400 "auth": { 00:19:58.400 "state": "completed", 00:19:58.400 "digest": "sha512", 00:19:58.400 "dhgroup": "ffdhe8192" 00:19:58.400 } 00:19:58.400 } 00:19:58.400 ]' 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.400 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.658 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:58.658 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:59.593 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:59.851 10:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.851 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.785 00:20:00.785 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.785 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.785 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.043 { 00:20:01.043 "cntlid": 145, 00:20:01.043 "qid": 0, 00:20:01.043 "state": "enabled", 00:20:01.043 "thread": "nvmf_tgt_poll_group_000", 00:20:01.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:01.043 "listen_address": { 00:20:01.043 "trtype": "TCP", 00:20:01.043 "adrfam": "IPv4", 00:20:01.043 "traddr": "10.0.0.2", 00:20:01.043 "trsvcid": "4420" 00:20:01.043 }, 00:20:01.043 "peer_address": { 00:20:01.043 "trtype": "TCP", 00:20:01.043 "adrfam": "IPv4", 00:20:01.043 "traddr": "10.0.0.1", 00:20:01.043 "trsvcid": "53766" 00:20:01.043 }, 00:20:01.043 "auth": { 00:20:01.043 "state": 
"completed", 00:20:01.043 "digest": "sha512", 00:20:01.043 "dhgroup": "ffdhe8192" 00:20:01.043 } 00:20:01.043 } 00:20:01.043 ]' 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:01.043 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.302 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.302 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.302 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.560 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:20:01.560 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:ZjcyYzI3OWI2ZjZhZDRlODRiZjk0NzRmMDJmYTNkMzIxMmI1ZjY0ZjMwYzJhNzhkv5vT0Q==: --dhchap-ctrl-secret 
DHHC-1:03:OGNhZTY3MDFhZmEzYTk1NmE5ODRmN2Q3YjU5NTNjMGZlMjlmMmQ0YzYzM2I3ODVhZDgyNDJjNWYwMzY5MjdmOGzKwOU=: 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect
00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:20:02.493 10:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:20:03.057 request:
00:20:03.057 {
00:20:03.057 "name": "nvme0",
00:20:03.057 "trtype": "tcp",
00:20:03.057 "traddr": "10.0.0.2",
00:20:03.057 "adrfam": "ipv4",
00:20:03.057 "trsvcid": "4420",
00:20:03.057 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:03.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:03.057 "prchk_reftag": false,
00:20:03.057 "prchk_guard": false,
00:20:03.057 "hdgst": false,
00:20:03.057 "ddgst": false,
00:20:03.057 "dhchap_key": "key2",
00:20:03.057 "allow_unrecognized_csi": false,
00:20:03.057 "method": "bdev_nvme_attach_controller",
00:20:03.057 "req_id": 1
00:20:03.057 }
00:20:03.057 Got JSON-RPC error response
00:20:03.057 response:
00:20:03.057 {
00:20:03.057 "code": -5,
00:20:03.057 "message": "Input/output error"
00:20:03.057 }
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:03.057 10:30:35
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:03.057 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:20:03.988 request:
00:20:03.988 {
00:20:03.988 "name": "nvme0",
00:20:03.988 "trtype": "tcp",
00:20:03.988 "traddr": "10.0.0.2",
00:20:03.988 "adrfam": "ipv4",
00:20:03.988 "trsvcid": "4420",
00:20:03.988 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:03.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:03.988 "prchk_reftag": false,
00:20:03.988 "prchk_guard": false,
00:20:03.988 "hdgst": false,
00:20:03.988 "ddgst": false,
00:20:03.988 "dhchap_key": "key1",
00:20:03.988 "dhchap_ctrlr_key": "ckey2",
00:20:03.988 "allow_unrecognized_csi": false,
00:20:03.988 "method": "bdev_nvme_attach_controller",
00:20:03.988 "req_id": 1
00:20:03.988 }
00:20:03.988 Got JSON-RPC error response
00:20:03.988 response:
00:20:03.988 {
00:20:03.988 "code": -5,
00:20:03.988 "message": "Input/output error"
00:20:03.988 }
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:03.988 10:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:04.922 request:
00:20:04.922 {
00:20:04.922 "name": "nvme0",
00:20:04.922 "trtype": "tcp",
00:20:04.922 "traddr": "10.0.0.2",
00:20:04.922 "adrfam": "ipv4",
00:20:04.922 "trsvcid": "4420",
00:20:04.922 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:04.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:04.922 "prchk_reftag": false,
00:20:04.922 "prchk_guard": false,
00:20:04.922 "hdgst": false,
00:20:04.922 "ddgst": false,
00:20:04.922 "dhchap_key": "key1",
00:20:04.922 "dhchap_ctrlr_key": "ckey1",
00:20:04.922 "allow_unrecognized_csi": false,
00:20:04.922 "method": "bdev_nvme_attach_controller",
00:20:04.922 "req_id": 1
00:20:04.922 }
00:20:04.922 Got JSON-RPC error response
00:20:04.922 response:
00:20:04.922 {
00:20:04.922 "code": -5,
00:20:04.922 "message": "Input/output error"
00:20:04.922 }
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2530271
00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@954 -- # '[' -z 2530271 ']' 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2530271 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2530271 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2530271' 00:20:04.922 killing process with pid 2530271 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2530271 00:20:04.922 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2530271 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2552953 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2552953 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2552953 ']' 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.180 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2552953 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2552953 ']' 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.437 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.438 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.438 10:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.695 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:05.695 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:05.695 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:05.695 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.695 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.695 null0 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iED 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.CQW ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.CQW 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Rwi 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.BuA ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.BuA 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Omv 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.RtE ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RtE 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.8EA 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:05.952 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.953 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.953 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.953 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.953 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.953 10:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:07.335 nvme0n1
00:20:07.335 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:07.335 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:07.335 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:07.593 {
00:20:07.593 "cntlid": 1,
00:20:07.593 "qid": 0,
00:20:07.593 "state": "enabled",
00:20:07.593 "thread": "nvmf_tgt_poll_group_000",
00:20:07.593 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:07.593 "listen_address": {
00:20:07.593 "trtype": "TCP",
00:20:07.593 "adrfam": "IPv4",
00:20:07.593 "traddr": "10.0.0.2",
00:20:07.593 "trsvcid": "4420"
00:20:07.593 },
00:20:07.593 "peer_address": {
00:20:07.593 "trtype": "TCP",
00:20:07.593 "adrfam": "IPv4",
00:20:07.593 "traddr": "10.0.0.1",
00:20:07.593 "trsvcid": "53826"
00:20:07.593 },
00:20:07.593 "auth": {
00:20:07.593 "state": "completed",
00:20:07.593 "digest": "sha512",
00:20:07.593 "dhgroup": "ffdhe8192"
00:20:07.593 }
00:20:07.593 }
00:20:07.593 ]'
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:07.593 10:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.157 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=:
00:20:08.157 10:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=:
00:20:09.090 10:30:41
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:09.090 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:09.090 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:09.657 request:
00:20:09.657 {
00:20:09.657 "name": "nvme0",
00:20:09.657 "trtype": "tcp",
00:20:09.657 "traddr": "10.0.0.2",
00:20:09.657 "adrfam": "ipv4",
00:20:09.657 "trsvcid": "4420",
00:20:09.657 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:09.657 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:09.657 "prchk_reftag": false,
00:20:09.657 "prchk_guard": false,
00:20:09.657 "hdgst": false,
00:20:09.657 "ddgst": false,
00:20:09.657 "dhchap_key": "key3",
00:20:09.657 "allow_unrecognized_csi": false,
00:20:09.657 "method": "bdev_nvme_attach_controller",
00:20:09.657 "req_id": 1
00:20:09.657 }
00:20:09.657 Got JSON-RPC error response
00:20:09.657 response:
00:20:09.657 {
00:20:09.657 "code": -5,
00:20:09.657 "message": "Input/output error"
00:20:09.657 }
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=,
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:09.657 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:20:09.914 10:30:42
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:09.914 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:10.172 request:
00:20:10.172 {
00:20:10.172 "name": "nvme0",
00:20:10.172 "trtype": "tcp",
00:20:10.172 "traddr": "10.0.0.2",
00:20:10.172 "adrfam": "ipv4",
00:20:10.172 "trsvcid": "4420",
00:20:10.172 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:10.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a",
00:20:10.172 "prchk_reftag": false,
00:20:10.172 "prchk_guard": false,
00:20:10.172 "hdgst": false,
00:20:10.172 "ddgst": false,
00:20:10.172 "dhchap_key": "key3",
00:20:10.172 "allow_unrecognized_csi": false,
00:20:10.172 "method": "bdev_nvme_attach_controller",
00:20:10.172 "req_id": 1
00:20:10.172 }
00:20:10.172 Got JSON-RPC error response
00:20:10.172 response:
00:20:10.172 {
00:20:10.172 "code": -5,
00:20:10.172 "message": "Input/output error"
00:20:10.172 }
00:20:10.172
10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.172 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.430 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:10.995 request: 00:20:10.995 { 00:20:10.995 "name": "nvme0", 00:20:10.995 "trtype": "tcp", 00:20:10.995 "traddr": "10.0.0.2", 00:20:10.995 "adrfam": "ipv4", 00:20:10.995 "trsvcid": "4420", 00:20:10.995 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:10.995 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:10.995 "prchk_reftag": false, 00:20:10.995 "prchk_guard": false, 00:20:10.995 "hdgst": false, 00:20:10.995 "ddgst": false, 00:20:10.995 "dhchap_key": "key0", 00:20:10.995 "dhchap_ctrlr_key": "key1", 00:20:10.995 "allow_unrecognized_csi": false, 00:20:10.995 "method": "bdev_nvme_attach_controller", 00:20:10.995 "req_id": 1 00:20:10.995 } 00:20:10.995 Got JSON-RPC error response 00:20:10.995 response: 00:20:10.995 { 00:20:10.995 "code": -5, 00:20:10.995 "message": "Input/output error" 00:20:10.995 } 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:10.995 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:11.253 nvme0n1 00:20:11.253 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:11.253 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.253 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:11.511 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.511 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.511 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:11.783 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:13.159 nvme0n1 00:20:13.159 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:13.159 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:13.159 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.416 
10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:13.416 10:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.674 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.674 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:20:13.674 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: --dhchap-ctrl-secret DHHC-1:03:MmJmMDUyNWVmMzI1NDRjNWI2NjIzYWY4YTEwOThlYmIwMjViNTQwY2M2YzQ3OWUxZjAwYjk0MDVhM2ZjZjc4NSq+Slk=: 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.713 10:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.997 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:14.998 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:14.998 10:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:15.931 request: 00:20:15.931 { 00:20:15.931 "name": "nvme0", 00:20:15.931 "trtype": "tcp", 00:20:15.931 "traddr": "10.0.0.2", 00:20:15.931 "adrfam": "ipv4", 00:20:15.931 "trsvcid": "4420", 00:20:15.931 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:15.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:15.931 "prchk_reftag": false, 00:20:15.931 "prchk_guard": false, 00:20:15.931 "hdgst": false, 00:20:15.931 "ddgst": false, 00:20:15.931 "dhchap_key": "key1", 00:20:15.931 "allow_unrecognized_csi": false, 00:20:15.931 "method": "bdev_nvme_attach_controller", 00:20:15.931 "req_id": 1 00:20:15.931 } 00:20:15.931 Got JSON-RPC error response 00:20:15.931 response: 00:20:15.931 { 00:20:15.931 "code": -5, 00:20:15.931 "message": "Input/output error" 00:20:15.931 } 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.931 10:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:17.304 nvme0n1 00:20:17.304 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:17.304 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:17.304 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.304 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.304 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.304 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.561 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:17.561 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.561 10:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:17.819 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.819 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:17.819 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:17.819 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:18.076 nvme0n1 00:20:18.076 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:18.076 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.076 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:18.339 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.339 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.339 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: '' 2s 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: ]] 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDk0NTA0ODZjYjc1MGM1MmE3YzlhYzkxYzFmZWVlYmRGlSIn: 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:18.596 10:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:21.121 
10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:21.121 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:21.121 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:21.121 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:21.121 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:21.121 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: 2s 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:21.122 10:30:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: ]] 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTFhZjdlZTc0M2FkNjExMWRmODk1MGQ0YzZkOGYyMWMyMGZkNzgyZjUyMzQxZWFhp3P99g==: 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:21.122 10:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:23.020 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:23.020 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:23.020 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:23.020 10:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:23.020 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:23.020 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:23.020 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:23.020 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:23.021 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.389 nvme0n1 00:20:24.389 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:24.389 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.389 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.389 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.389 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:24.389 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:24.954 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:24.954 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:24.954 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:25.211 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:25.468 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:25.468 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:25.468 10:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:26.042 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:26.605 request: 00:20:26.605 { 00:20:26.605 "name": "nvme0", 00:20:26.605 "dhchap_key": "key1", 00:20:26.605 "dhchap_ctrlr_key": "key3", 00:20:26.605 "method": "bdev_nvme_set_keys", 00:20:26.605 "req_id": 1 00:20:26.605 } 00:20:26.605 Got JSON-RPC error response 00:20:26.605 response: 00:20:26.605 { 00:20:26.605 "code": -13, 00:20:26.605 "message": "Permission denied" 00:20:26.605 } 00:20:26.605 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:26.605 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.605 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.605 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.606 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:26.606 10:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:26.606 10:30:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.862 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:26.862 10:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:28.233 10:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:29.606 nvme0n1 00:20:29.606 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.607 10:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:29.607 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:30.540 request: 00:20:30.540 { 00:20:30.540 "name": "nvme0", 00:20:30.540 "dhchap_key": "key2", 00:20:30.540 "dhchap_ctrlr_key": "key0", 00:20:30.540 "method": "bdev_nvme_set_keys", 00:20:30.540 "req_id": 1 00:20:30.540 } 00:20:30.540 Got JSON-RPC error response 00:20:30.540 response: 00:20:30.540 { 00:20:30.540 "code": -13, 00:20:30.540 "message": "Permission denied" 00:20:30.540 } 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:30.540 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.799 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:30.799 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:31.732 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:31.732 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:31.732 10:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2530292 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2530292 ']' 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2530292 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2530292 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2530292' 00:20:31.990 killing process with pid 2530292 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2530292 00:20:31.990 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2530292 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.556 rmmod nvme_tcp 00:20:32.556 rmmod nvme_fabrics 00:20:32.556 rmmod nvme_keyring 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2552953 ']' 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2552953 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2552953 ']' 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2552953 
00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2552953 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2552953' 00:20:32.556 killing process with pid 2552953 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2552953 00:20:32.556 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2552953 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:32.824 10:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.824 10:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.iED /tmp/spdk.key-sha256.Rwi /tmp/spdk.key-sha384.Omv /tmp/spdk.key-sha512.8EA /tmp/spdk.key-sha512.CQW /tmp/spdk.key-sha384.BuA /tmp/spdk.key-sha256.RtE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:34.766 00:20:34.766 real 3m30.365s 00:20:34.766 user 8m13.854s 00:20:34.766 sys 0m27.962s 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.766 ************************************ 00:20:34.766 END TEST nvmf_auth_target 00:20:34.766 ************************************ 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:34.766 ************************************ 00:20:34.766 START TEST nvmf_bdevio_no_huge 00:20:34.766 ************************************ 00:20:34.766 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:35.025 * Looking for test storage... 00:20:35.025 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:35.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.025 --rc genhtml_branch_coverage=1 00:20:35.025 --rc genhtml_function_coverage=1 00:20:35.025 --rc genhtml_legend=1 00:20:35.025 --rc geninfo_all_blocks=1 00:20:35.025 --rc geninfo_unexecuted_blocks=1 00:20:35.025 00:20:35.025 ' 00:20:35.025 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:35.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.026 --rc genhtml_branch_coverage=1 00:20:35.026 --rc genhtml_function_coverage=1 00:20:35.026 --rc genhtml_legend=1 00:20:35.026 --rc geninfo_all_blocks=1 00:20:35.026 --rc geninfo_unexecuted_blocks=1 00:20:35.026 00:20:35.026 ' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:35.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.026 --rc genhtml_branch_coverage=1 00:20:35.026 --rc genhtml_function_coverage=1 00:20:35.026 --rc genhtml_legend=1 00:20:35.026 --rc geninfo_all_blocks=1 00:20:35.026 --rc geninfo_unexecuted_blocks=1 00:20:35.026 00:20:35.026 ' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:35.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:35.026 --rc genhtml_branch_coverage=1 
00:20:35.026 --rc genhtml_function_coverage=1 00:20:35.026 --rc genhtml_legend=1 00:20:35.026 --rc geninfo_all_blocks=1 00:20:35.026 --rc geninfo_unexecuted_blocks=1 00:20:35.026 00:20:35.026 ' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.026 10:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:35.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.026 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x159b)' 00:20:37.556 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:37.556 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:37.556 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:37.557 Found net devices under 0000:09:00.0: cvl_0_0 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.557 
10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:37.557 Found net devices under 0000:09:00.1: cvl_0_1 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:20:37.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:20:37.557 00:20:37.557 --- 10.0.0.2 ping statistics --- 00:20:37.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.557 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:37.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:37.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:37.557 00:20:37.557 --- 10.0.0.1 ping statistics --- 00:20:37.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.557 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2558204 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2558204 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2558204 ']' 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.557 10:31:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.557 [2024-12-09 10:31:09.786561] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:20:37.557 [2024-12-09 10:31:09.786638] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:37.557 [2024-12-09 10:31:09.868042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.557 [2024-12-09 10:31:09.928337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.557 [2024-12-09 10:31:09.928392] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.557 [2024-12-09 10:31:09.928414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.557 [2024-12-09 10:31:09.928425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.557 [2024-12-09 10:31:09.928434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.557 [2024-12-09 10:31:09.929460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.557 [2024-12-09 10:31:09.929532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.557 [2024-12-09 10:31:09.929580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.557 [2024-12-09 10:31:09.929583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 [2024-12-09 10:31:10.089459] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.816 10:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 Malloc0 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.816 [2024-12-09 10:31:10.128273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.816 10:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:37.816 { 00:20:37.816 "params": { 00:20:37.816 "name": "Nvme$subsystem", 00:20:37.816 "trtype": "$TEST_TRANSPORT", 00:20:37.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.816 "adrfam": "ipv4", 00:20:37.816 "trsvcid": "$NVMF_PORT", 00:20:37.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.816 "hdgst": ${hdgst:-false}, 00:20:37.816 "ddgst": ${ddgst:-false} 00:20:37.816 }, 00:20:37.816 "method": "bdev_nvme_attach_controller" 00:20:37.816 } 00:20:37.816 EOF 00:20:37.816 )") 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:20:37.816 10:31:10 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:37.816 "params": { 00:20:37.816 "name": "Nvme1", 00:20:37.816 "trtype": "tcp", 00:20:37.816 "traddr": "10.0.0.2", 00:20:37.816 "adrfam": "ipv4", 00:20:37.816 "trsvcid": "4420", 00:20:37.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.816 "hdgst": false, 00:20:37.816 "ddgst": false 00:20:37.816 }, 00:20:37.816 "method": "bdev_nvme_attach_controller" 00:20:37.816 }' 00:20:37.816 [2024-12-09 10:31:10.178687] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:20:37.816 [2024-12-09 10:31:10.178775] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --legacy-mem --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2558239 ] 00:20:37.816 [2024-12-09 10:31:10.251631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:38.074 [2024-12-09 10:31:10.318688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.074 [2024-12-09 10:31:10.318738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.074 [2024-12-09 10:31:10.318741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.332 I/O targets: 00:20:38.332 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:38.332 00:20:38.332 00:20:38.332 CUnit - A unit testing framework for C - Version 2.1-3 00:20:38.332 http://cunit.sourceforge.net/ 00:20:38.332 00:20:38.332 00:20:38.332 Suite: bdevio tests on: Nvme1n1 00:20:38.332 Test: blockdev write read block ...passed 00:20:38.332 Test: blockdev write zeroes read block ...passed 00:20:38.332 Test: blockdev write zeroes read no split ...passed 00:20:38.332 Test: blockdev 
write zeroes read split ...passed 00:20:38.332 Test: blockdev write zeroes read split partial ...passed 00:20:38.332 Test: blockdev reset ...[2024-12-09 10:31:10.671665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:38.332 [2024-12-09 10:31:10.671780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14780e0 (9): Bad file descriptor 00:20:38.332 [2024-12-09 10:31:10.691939] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:38.332 passed 00:20:38.332 Test: blockdev write read 8 blocks ...passed 00:20:38.332 Test: blockdev write read size > 128k ...passed 00:20:38.332 Test: blockdev write read invalid size ...passed 00:20:38.590 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:38.590 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:38.590 Test: blockdev write read max offset ...passed 00:20:38.590 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:38.590 Test: blockdev writev readv 8 blocks ...passed 00:20:38.590 Test: blockdev writev readv 30 x 1block ...passed 00:20:38.590 Test: blockdev writev readv block ...passed 00:20:38.590 Test: blockdev writev readv size > 128k ...passed 00:20:38.590 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:38.590 Test: blockdev comparev and writev ...[2024-12-09 10:31:10.948945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.948981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.949007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 
[2024-12-09 10:31:10.949025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.949338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.949365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.949388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.949411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.949746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.949770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.949792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.949808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.950164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.950189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:38.590 [2024-12-09 10:31:10.950211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.590 [2024-12-09 10:31:10.950228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:38.590 passed 00:20:38.849 Test: blockdev nvme passthru rw ...passed 00:20:38.849 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:31:11.032401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.849 [2024-12-09 10:31:11.032429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:38.849 [2024-12-09 10:31:11.032572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.849 [2024-12-09 10:31:11.032596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:38.849 [2024-12-09 10:31:11.032736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.849 [2024-12-09 10:31:11.032760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:38.849 [2024-12-09 10:31:11.032897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.849 [2024-12-09 10:31:11.032921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:38.849 passed 00:20:38.849 Test: blockdev nvme admin passthru ...passed 00:20:38.849 Test: blockdev copy ...passed 00:20:38.849 00:20:38.849 Run Summary: Type Total Ran Passed Failed Inactive 00:20:38.849 suites 1 1 n/a 0 0 00:20:38.849 tests 23 23 23 0 0 00:20:38.849 asserts 152 152 152 0 n/a 00:20:38.849 00:20:38.849 Elapsed time = 1.082 
seconds 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.107 rmmod nvme_tcp 00:20:39.107 rmmod nvme_fabrics 00:20:39.107 rmmod nvme_keyring 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2558204 ']' 00:20:39.107 10:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2558204 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2558204 ']' 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2558204 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2558204 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:39.107 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2558204' 00:20:39.107 killing process with pid 2558204 00:20:39.364 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2558204 00:20:39.364 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2558204 00:20:39.624 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:20:39.625 10:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.625 10:31:11 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.160 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:42.160 00:20:42.160 real 0m6.793s 00:20:42.160 user 0m10.736s 00:20:42.160 sys 0m2.680s 00:20:42.160 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.160 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:42.160 ************************************ 00:20:42.160 END TEST nvmf_bdevio_no_huge 00:20:42.160 ************************************ 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:42.160 
************************************ 00:20:42.160 START TEST nvmf_tls 00:20:42.160 ************************************ 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:42.160 * Looking for test storage... 00:20:42.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:42.160 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:42.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.161 --rc genhtml_branch_coverage=1 00:20:42.161 --rc genhtml_function_coverage=1 00:20:42.161 --rc genhtml_legend=1 00:20:42.161 --rc geninfo_all_blocks=1 00:20:42.161 --rc geninfo_unexecuted_blocks=1 00:20:42.161 00:20:42.161 ' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:42.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.161 --rc genhtml_branch_coverage=1 00:20:42.161 --rc genhtml_function_coverage=1 00:20:42.161 --rc genhtml_legend=1 00:20:42.161 --rc geninfo_all_blocks=1 00:20:42.161 --rc geninfo_unexecuted_blocks=1 00:20:42.161 00:20:42.161 ' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:42.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.161 --rc genhtml_branch_coverage=1 00:20:42.161 --rc genhtml_function_coverage=1 00:20:42.161 --rc genhtml_legend=1 00:20:42.161 --rc geninfo_all_blocks=1 00:20:42.161 --rc geninfo_unexecuted_blocks=1 00:20:42.161 00:20:42.161 ' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:42.161 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.161 --rc genhtml_branch_coverage=1 00:20:42.161 --rc genhtml_function_coverage=1 00:20:42.161 --rc genhtml_legend=1 00:20:42.161 --rc geninfo_all_blocks=1 00:20:42.161 --rc geninfo_unexecuted_blocks=1 00:20:42.161 00:20:42.161 ' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.161 
10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:42.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:20:42.161 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.077 10:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:20:44.077 Found 0000:09:00.0 (0x8086 - 0x159b) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:20:44.077 Found 0000:09:00.1 (0x8086 - 0x159b) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.077 10:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:20:44.077 Found net devices under 0000:09:00.0: cvl_0_0 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:44.077 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:20:44.077 Found net devices under 0000:09:00.1: cvl_0_1 00:20:44.077 10:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:44.078 
10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:44.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:44.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:20:44.078 00:20:44.078 --- 10.0.0.2 ping statistics --- 00:20:44.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.078 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:44.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:44.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:44.078 00:20:44.078 --- 10.0.0.1 ping statistics --- 00:20:44.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.078 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2560437 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2560437 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2560437 ']' 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.078 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.335 [2024-12-09 10:31:16.538614] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:20:44.335 [2024-12-09 10:31:16.538689] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.335 [2024-12-09 10:31:16.610620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.335 [2024-12-09 10:31:16.669871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.335 [2024-12-09 10:31:16.669922] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:44.335 [2024-12-09 10:31:16.669951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.335 [2024-12-09 10:31:16.669962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.335 [2024-12-09 10:31:16.669971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.335 [2024-12-09 10:31:16.670602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.335 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.335 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:44.335 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.335 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.335 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.593 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.593 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:44.593 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:44.851 true 00:20:44.851 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:44.851 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:45.109 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:45.109 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:45.109 
10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:45.366 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:45.366 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:45.622 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:45.622 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:45.622 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:45.879 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:45.879 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:46.135 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:46.135 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:46.135 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:46.135 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:46.392 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:46.392 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:46.392 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:46.650 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:46.650 10:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:46.908 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:46.908 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:46.908 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:47.165 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:47.165 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:47.424 10:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:47.424 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.jgljIHTMNK 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Z2o7F0zOK7 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.jgljIHTMNK 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Z2o7F0zOK7 00:20:47.425 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:47.994 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:48.252 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.jgljIHTMNK 00:20:48.252 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jgljIHTMNK 00:20:48.252 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:48.512 [2024-12-09 10:31:20.746972] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.512 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:48.800 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:49.063 [2024-12-09 10:31:21.292482] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:49.063 [2024-12-09 10:31:21.292753] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.063 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:49.320 malloc0 00:20:49.320 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:49.576 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jgljIHTMNK 00:20:49.833 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:50.091 10:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.jgljIHTMNK 00:21:02.336 Initializing NVMe Controllers 00:21:02.336 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.336 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.336 Initialization complete. Launching workers. 
00:21:02.336 ======================================================== 00:21:02.336 Latency(us) 00:21:02.336 Device Information : IOPS MiB/s Average min max 00:21:02.336 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8661.00 33.83 7391.56 1134.80 8653.81 00:21:02.336 ======================================================== 00:21:02.336 Total : 8661.00 33.83 7391.56 1134.80 8653.81 00:21:02.336 00:21:02.336 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jgljIHTMNK 00:21:02.336 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jgljIHTMNK 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2562342 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2562342 /var/tmp/bdevperf.sock 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2562342 ']' 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.337 [2024-12-09 10:31:32.671916] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:02.337 [2024-12-09 10:31:32.671994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562342 ] 00:21:02.337 [2024-12-09 10:31:32.740264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.337 [2024-12-09 10:31:32.800089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.337 10:31:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jgljIHTMNK 00:21:02.337 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:02.337 [2024-12-09 10:31:33.439724] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:02.337 TLSTESTn1 00:21:02.337 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:02.337 Running I/O for 10 seconds... 00:21:03.269 3399.00 IOPS, 13.28 MiB/s [2024-12-09T09:31:37.084Z] 3490.50 IOPS, 13.63 MiB/s [2024-12-09T09:31:38.019Z] 3555.67 IOPS, 13.89 MiB/s [2024-12-09T09:31:38.953Z] 3569.25 IOPS, 13.94 MiB/s [2024-12-09T09:31:39.886Z] 3583.40 IOPS, 14.00 MiB/s [2024-12-09T09:31:40.818Z] 3575.00 IOPS, 13.96 MiB/s [2024-12-09T09:31:41.750Z] 3593.14 IOPS, 14.04 MiB/s [2024-12-09T09:31:42.682Z] 3603.75 IOPS, 14.08 MiB/s [2024-12-09T09:31:44.056Z] 3608.67 IOPS, 14.10 MiB/s [2024-12-09T09:31:44.056Z] 3614.10 IOPS, 14.12 MiB/s 00:21:11.615 Latency(us) 00:21:11.615 [2024-12-09T09:31:44.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.615 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:11.615 Verification LBA range: start 0x0 length 0x2000 00:21:11.615 TLSTESTn1 : 10.02 3619.63 14.14 0.00 0.00 35304.22 6505.05 50098.63 00:21:11.615 [2024-12-09T09:31:44.056Z] =================================================================================================================== 00:21:11.615 [2024-12-09T09:31:44.056Z] Total : 3619.63 14.14 0.00 0.00 35304.22 6505.05 50098.63 00:21:11.615 { 00:21:11.615 "results": [ 00:21:11.615 { 00:21:11.615 "job": "TLSTESTn1", 00:21:11.615 "core_mask": "0x4", 00:21:11.615 "workload": "verify", 00:21:11.615 "status": "finished", 00:21:11.615 "verify_range": { 00:21:11.615 "start": 0, 00:21:11.615 "length": 8192 00:21:11.615 }, 00:21:11.615 "queue_depth": 128, 00:21:11.615 "io_size": 4096, 00:21:11.615 "runtime": 10.020073, 00:21:11.615 "iops": 
3619.6343080534443, 00:21:11.615 "mibps": 14.139196515833767, 00:21:11.615 "io_failed": 0, 00:21:11.615 "io_timeout": 0, 00:21:11.615 "avg_latency_us": 35304.21655763569, 00:21:11.615 "min_latency_us": 6505.054814814815, 00:21:11.615 "max_latency_us": 50098.63111111111 00:21:11.615 } 00:21:11.615 ], 00:21:11.615 "core_count": 1 00:21:11.615 } 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2562342 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2562342 ']' 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2562342 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2562342 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2562342' 00:21:11.615 killing process with pid 2562342 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2562342 00:21:11.615 Received shutdown signal, test time was about 10.000000 seconds 00:21:11.615 00:21:11.615 Latency(us) 00:21:11.615 [2024-12-09T09:31:44.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.615 [2024-12-09T09:31:44.056Z] 
=================================================================================================================== 00:21:11.615 [2024-12-09T09:31:44.056Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:11.615 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2562342 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z2o7F0zOK7 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z2o7F0zOK7 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z2o7F0zOK7 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Z2o7F0zOK7 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2563659 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2563659 /var/tmp/bdevperf.sock 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2563659 ']' 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.615 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:11.874 [2024-12-09 10:31:44.076891] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:11.874 [2024-12-09 10:31:44.076971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563659 ] 00:21:11.874 [2024-12-09 10:31:44.142090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:11.874 [2024-12-09 10:31:44.199159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.874 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.874 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:11.874 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Z2o7F0zOK7 00:21:12.441 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:12.441 [2024-12-09 10:31:44.849661] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:12.441 [2024-12-09 10:31:44.855277] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:12.441 [2024-12-09 10:31:44.855762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc06f30 (107): Transport endpoint is not connected 00:21:12.441 [2024-12-09 10:31:44.856735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc06f30 (9): Bad file descriptor 00:21:12.441 [2024-12-09 
10:31:44.857735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:12.441 [2024-12-09 10:31:44.857755] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:12.441 [2024-12-09 10:31:44.857783] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:12.441 [2024-12-09 10:31:44.857801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:12.441 request: 00:21:12.441 { 00:21:12.441 "name": "TLSTEST", 00:21:12.441 "trtype": "tcp", 00:21:12.441 "traddr": "10.0.0.2", 00:21:12.441 "adrfam": "ipv4", 00:21:12.441 "trsvcid": "4420", 00:21:12.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:12.441 "prchk_reftag": false, 00:21:12.441 "prchk_guard": false, 00:21:12.441 "hdgst": false, 00:21:12.441 "ddgst": false, 00:21:12.441 "psk": "key0", 00:21:12.441 "allow_unrecognized_csi": false, 00:21:12.441 "method": "bdev_nvme_attach_controller", 00:21:12.441 "req_id": 1 00:21:12.441 } 00:21:12.441 Got JSON-RPC error response 00:21:12.441 response: 00:21:12.441 { 00:21:12.441 "code": -5, 00:21:12.441 "message": "Input/output error" 00:21:12.441 } 00:21:12.441 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2563659 00:21:12.441 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2563659 ']' 00:21:12.441 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2563659 00:21:12.441 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563659 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563659' 00:21:12.700 killing process with pid 2563659 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2563659 00:21:12.700 Received shutdown signal, test time was about 10.000000 seconds 00:21:12.700 00:21:12.700 Latency(us) 00:21:12.700 [2024-12-09T09:31:45.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.700 [2024-12-09T09:31:45.141Z] =================================================================================================================== 00:21:12.700 [2024-12-09T09:31:45.141Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:12.700 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2563659 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jgljIHTMNK 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jgljIHTMNK 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.jgljIHTMNK 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jgljIHTMNK 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2563801 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2563801 
/var/tmp/bdevperf.sock 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2563801 ']' 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:12.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.962 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:12.962 [2024-12-09 10:31:45.230886] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:12.962 [2024-12-09 10:31:45.230967] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563801 ] 00:21:12.962 [2024-12-09 10:31:45.297602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.962 [2024-12-09 10:31:45.352838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.229 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.229 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.229 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jgljIHTMNK 00:21:13.487 10:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:13.745 [2024-12-09 10:31:46.002949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.745 [2024-12-09 10:31:46.008220] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:13.745 [2024-12-09 10:31:46.008252] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:13.745 [2024-12-09 10:31:46.008307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:13.745 [2024-12-09 10:31:46.008894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219df30 (107): Transport endpoint is not connected 00:21:13.745 [2024-12-09 10:31:46.009883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219df30 (9): Bad file descriptor 00:21:13.745 [2024-12-09 10:31:46.010882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:13.745 [2024-12-09 10:31:46.010906] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:13.745 [2024-12-09 10:31:46.010935] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:13.745 [2024-12-09 10:31:46.010953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:13.745 request: 00:21:13.745 { 00:21:13.745 "name": "TLSTEST", 00:21:13.745 "trtype": "tcp", 00:21:13.745 "traddr": "10.0.0.2", 00:21:13.745 "adrfam": "ipv4", 00:21:13.745 "trsvcid": "4420", 00:21:13.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:13.745 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:13.745 "prchk_reftag": false, 00:21:13.745 "prchk_guard": false, 00:21:13.745 "hdgst": false, 00:21:13.745 "ddgst": false, 00:21:13.745 "psk": "key0", 00:21:13.745 "allow_unrecognized_csi": false, 00:21:13.745 "method": "bdev_nvme_attach_controller", 00:21:13.745 "req_id": 1 00:21:13.745 } 00:21:13.745 Got JSON-RPC error response 00:21:13.745 response: 00:21:13.745 { 00:21:13.745 "code": -5, 00:21:13.745 "message": "Input/output error" 00:21:13.745 } 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2563801 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2563801 ']' 00:21:13.745 10:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2563801 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563801 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563801' 00:21:13.745 killing process with pid 2563801 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2563801 00:21:13.745 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.745 00:21:13.745 Latency(us) 00:21:13.745 [2024-12-09T09:31:46.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.745 [2024-12-09T09:31:46.186Z] =================================================================================================================== 00:21:13.745 [2024-12-09T09:31:46.186Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:13.745 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2563801 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.004 10:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jgljIHTMNK 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jgljIHTMNK 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.jgljIHTMNK 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jgljIHTMNK 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2563943 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2563943 /var/tmp/bdevperf.sock 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2563943 ']' 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.004 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.004 [2024-12-09 10:31:46.351437] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:14.004 [2024-12-09 10:31:46.351528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563943 ] 00:21:14.004 [2024-12-09 10:31:46.419780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.263 [2024-12-09 10:31:46.477636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.263 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.263 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.263 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jgljIHTMNK 00:21:14.521 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:14.779 [2024-12-09 10:31:47.119857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.779 [2024-12-09 10:31:47.128578] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:14.779 [2024-12-09 10:31:47.128610] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:14.779 [2024-12-09 10:31:47.128674] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:14.779 [2024-12-09 10:31:47.128945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1690f30 (107): Transport endpoint is not connected 00:21:14.779 [2024-12-09 10:31:47.129932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1690f30 (9): Bad file descriptor 00:21:14.779 [2024-12-09 10:31:47.130931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:14.779 [2024-12-09 10:31:47.130950] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:14.779 [2024-12-09 10:31:47.130977] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:14.779 [2024-12-09 10:31:47.130995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:14.779 request: 00:21:14.779 { 00:21:14.779 "name": "TLSTEST", 00:21:14.779 "trtype": "tcp", 00:21:14.779 "traddr": "10.0.0.2", 00:21:14.779 "adrfam": "ipv4", 00:21:14.779 "trsvcid": "4420", 00:21:14.779 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:14.779 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.779 "prchk_reftag": false, 00:21:14.779 "prchk_guard": false, 00:21:14.779 "hdgst": false, 00:21:14.779 "ddgst": false, 00:21:14.779 "psk": "key0", 00:21:14.779 "allow_unrecognized_csi": false, 00:21:14.779 "method": "bdev_nvme_attach_controller", 00:21:14.779 "req_id": 1 00:21:14.779 } 00:21:14.779 Got JSON-RPC error response 00:21:14.779 response: 00:21:14.779 { 00:21:14.779 "code": -5, 00:21:14.779 "message": "Input/output error" 00:21:14.779 } 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2563943 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2563943 ']' 00:21:14.779 10:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2563943 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563943 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563943' 00:21:14.779 killing process with pid 2563943 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2563943 00:21:14.779 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.779 00:21:14.779 Latency(us) 00:21:14.779 [2024-12-09T09:31:47.220Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.779 [2024-12-09T09:31:47.220Z] =================================================================================================================== 00:21:14.779 [2024-12-09T09:31:47.220Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.779 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2563943 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.038 10:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2564091 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2564091 /var/tmp/bdevperf.sock 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2564091 ']' 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.038 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.320 [2024-12-09 10:31:47.502049] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:15.320 [2024-12-09 10:31:47.502149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564091 ] 00:21:15.321 [2024-12-09 10:31:47.568025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.321 [2024-12-09 10:31:47.626618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.321 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.321 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.321 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:15.579 [2024-12-09 10:31:47.987056] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:15.579 [2024-12-09 10:31:47.987103] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:15.579 request: 00:21:15.579 { 00:21:15.579 "name": "key0", 00:21:15.579 "path": "", 00:21:15.579 "method": "keyring_file_add_key", 00:21:15.579 "req_id": 1 00:21:15.579 } 00:21:15.579 Got JSON-RPC error response 00:21:15.579 response: 00:21:15.579 { 00:21:15.579 "code": -1, 00:21:15.579 "message": "Operation not permitted" 00:21:15.579 } 00:21:15.579 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.836 [2024-12-09 10:31:48.275953] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:15.836 [2024-12-09 10:31:48.276025] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:16.094 request: 00:21:16.094 { 00:21:16.094 "name": "TLSTEST", 00:21:16.094 "trtype": "tcp", 00:21:16.094 "traddr": "10.0.0.2", 00:21:16.094 "adrfam": "ipv4", 00:21:16.094 "trsvcid": "4420", 00:21:16.094 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.094 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.094 "prchk_reftag": false, 00:21:16.094 "prchk_guard": false, 00:21:16.094 "hdgst": false, 00:21:16.094 "ddgst": false, 00:21:16.094 "psk": "key0", 00:21:16.094 "allow_unrecognized_csi": false, 00:21:16.094 "method": "bdev_nvme_attach_controller", 00:21:16.094 "req_id": 1 00:21:16.094 } 00:21:16.094 Got JSON-RPC error response 00:21:16.094 response: 00:21:16.094 { 00:21:16.094 "code": -126, 00:21:16.094 "message": "Required key not available" 00:21:16.094 } 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2564091 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2564091 ']' 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2564091 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564091 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564091' 00:21:16.094 killing process with pid 2564091 
00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2564091 00:21:16.094 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.094 00:21:16.094 Latency(us) 00:21:16.094 [2024-12-09T09:31:48.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.094 [2024-12-09T09:31:48.535Z] =================================================================================================================== 00:21:16.094 [2024-12-09T09:31:48.535Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.094 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2564091 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2560437 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2560437 ']' 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2560437 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560437 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560437' 00:21:16.352 killing process with pid 2560437 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2560437 00:21:16.352 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2560437 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.n2jUW9F7x5 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:16.610 10:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.n2jUW9F7x5 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2564247 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2564247 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2564247 ']' 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:16.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.610 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.610 [2024-12-09 10:31:49.024662] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:16.610 [2024-12-09 10:31:49.024743] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:16.867 [2024-12-09 10:31:49.095027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.867 [2024-12-09 10:31:49.150509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:16.867 [2024-12-09 10:31:49.150563] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:16.867 [2024-12-09 10:31:49.150592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:16.867 [2024-12-09 10:31:49.150605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:16.867 [2024-12-09 10:31:49.150614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:16.867 [2024-12-09 10:31:49.151161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.n2jUW9F7x5 00:21:16.867 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n2jUW9F7x5 00:21:16.868 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.124 [2024-12-09 10:31:49.533424] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.124 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:17.381 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:17.637 [2024-12-09 10:31:50.070917] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:17.637 [2024-12-09 10:31:50.071220] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:17.915 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.172 malloc0 00:21:18.172 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.429 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:18.686 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:18.943 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n2jUW9F7x5 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n2jUW9F7x5 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2564532 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.944 10:31:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2564532 /var/tmp/bdevperf.sock 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2564532 ']' 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.944 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.944 [2024-12-09 10:31:51.244181] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:18.944 [2024-12-09 10:31:51.244259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564532 ] 00:21:18.944 [2024-12-09 10:31:51.308680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.944 [2024-12-09 10:31:51.365874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.201 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.201 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.201 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:19.459 10:31:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:19.716 [2024-12-09 10:31:52.003090] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.716 TLSTESTn1 00:21:19.716 10:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:19.974 Running I/O for 10 seconds... 
00:21:21.841 3222.00 IOPS, 12.59 MiB/s [2024-12-09T09:31:55.240Z] 3314.00 IOPS, 12.95 MiB/s [2024-12-09T09:31:56.614Z] 3351.00 IOPS, 13.09 MiB/s [2024-12-09T09:31:57.545Z] 3345.50 IOPS, 13.07 MiB/s [2024-12-09T09:31:58.477Z] 3365.00 IOPS, 13.14 MiB/s [2024-12-09T09:31:59.407Z] 3358.83 IOPS, 13.12 MiB/s [2024-12-09T09:32:00.341Z] 3367.57 IOPS, 13.15 MiB/s [2024-12-09T09:32:01.316Z] 3362.62 IOPS, 13.14 MiB/s [2024-12-09T09:32:02.266Z] 3367.44 IOPS, 13.15 MiB/s [2024-12-09T09:32:02.266Z] 3369.90 IOPS, 13.16 MiB/s 00:21:29.825 Latency(us) 00:21:29.825 [2024-12-09T09:32:02.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.825 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:29.825 Verification LBA range: start 0x0 length 0x2000 00:21:29.825 TLSTESTn1 : 10.02 3376.36 13.19 0.00 0.00 37846.72 7184.69 36311.80 00:21:29.825 [2024-12-09T09:32:02.266Z] =================================================================================================================== 00:21:29.825 [2024-12-09T09:32:02.266Z] Total : 3376.36 13.19 0.00 0.00 37846.72 7184.69 36311.80 00:21:29.825 { 00:21:29.825 "results": [ 00:21:29.825 { 00:21:29.825 "job": "TLSTESTn1", 00:21:29.825 "core_mask": "0x4", 00:21:29.825 "workload": "verify", 00:21:29.825 "status": "finished", 00:21:29.825 "verify_range": { 00:21:29.825 "start": 0, 00:21:29.825 "length": 8192 00:21:29.825 }, 00:21:29.825 "queue_depth": 128, 00:21:29.825 "io_size": 4096, 00:21:29.825 "runtime": 10.018495, 00:21:29.825 "iops": 3376.355430631048, 00:21:29.825 "mibps": 13.188888400902531, 00:21:29.825 "io_failed": 0, 00:21:29.825 "io_timeout": 0, 00:21:29.825 "avg_latency_us": 37846.71673378576, 00:21:29.825 "min_latency_us": 7184.687407407408, 00:21:29.825 "max_latency_us": 36311.79851851852 00:21:29.825 } 00:21:29.825 ], 00:21:29.825 "core_count": 1 00:21:29.825 } 00:21:29.825 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:21:29.825 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2564532 00:21:29.825 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2564532 ']' 00:21:29.825 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2564532 00:21:29.825 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564532 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564532' 00:21:30.083 killing process with pid 2564532 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2564532 00:21:30.083 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.083 00:21:30.083 Latency(us) 00:21:30.083 [2024-12-09T09:32:02.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.083 [2024-12-09T09:32:02.524Z] =================================================================================================================== 00:21:30.083 [2024-12-09T09:32:02.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.083 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2564532 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.n2jUW9F7x5 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n2jUW9F7x5 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n2jUW9F7x5 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.n2jUW9F7x5 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.n2jUW9F7x5 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2565856 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2565856 /var/tmp/bdevperf.sock 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2565856 ']' 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.341 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.341 [2024-12-09 10:32:02.632255] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:30.341 [2024-12-09 10:32:02.632336] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565856 ] 00:21:30.341 [2024-12-09 10:32:02.701486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.341 [2024-12-09 10:32:02.760234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.599 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.599 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.599 10:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:30.857 [2024-12-09 10:32:03.134730] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n2jUW9F7x5': 0100666 00:21:30.857 [2024-12-09 10:32:03.134777] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:30.857 request: 00:21:30.857 { 00:21:30.857 "name": "key0", 00:21:30.857 "path": "/tmp/tmp.n2jUW9F7x5", 00:21:30.857 "method": "keyring_file_add_key", 00:21:30.857 "req_id": 1 00:21:30.857 } 00:21:30.857 Got JSON-RPC error response 00:21:30.857 response: 00:21:30.857 { 00:21:30.857 "code": -1, 00:21:30.857 "message": "Operation not permitted" 00:21:30.857 } 00:21:30.857 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:31.115 [2024-12-09 10:32:03.459692] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.115 [2024-12-09 10:32:03.459758] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:31.115 request: 00:21:31.115 { 00:21:31.115 "name": "TLSTEST", 00:21:31.115 "trtype": "tcp", 00:21:31.115 "traddr": "10.0.0.2", 00:21:31.115 "adrfam": "ipv4", 00:21:31.115 "trsvcid": "4420", 00:21:31.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.115 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.115 "prchk_reftag": false, 00:21:31.115 "prchk_guard": false, 00:21:31.115 "hdgst": false, 00:21:31.115 "ddgst": false, 00:21:31.115 "psk": "key0", 00:21:31.115 "allow_unrecognized_csi": false, 00:21:31.115 "method": "bdev_nvme_attach_controller", 00:21:31.115 "req_id": 1 00:21:31.115 } 00:21:31.115 Got JSON-RPC error response 00:21:31.115 response: 00:21:31.115 { 00:21:31.115 "code": -126, 00:21:31.115 "message": "Required key not available" 00:21:31.115 } 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2565856 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2565856 ']' 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2565856 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2565856 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2565856' 00:21:31.115 killing process with pid 2565856 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2565856 00:21:31.115 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.115 00:21:31.115 Latency(us) 00:21:31.115 [2024-12-09T09:32:03.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.115 [2024-12-09T09:32:03.556Z] =================================================================================================================== 00:21:31.115 [2024-12-09T09:32:03.556Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:31.115 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2565856 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2564247 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2564247 ']' 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2564247 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.373 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2564247 00:21:31.631 
10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.631 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.631 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2564247' 00:21:31.631 killing process with pid 2564247 00:21:31.631 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2564247 00:21:31.631 10:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2564247 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2566040 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2566040 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2566040 ']' 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:31.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.889 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.889 [2024-12-09 10:32:04.159318] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:31.889 [2024-12-09 10:32:04.159401] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.889 [2024-12-09 10:32:04.233288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.889 [2024-12-09 10:32:04.290722] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.889 [2024-12-09 10:32:04.290782] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.889 [2024-12-09 10:32:04.290811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.890 [2024-12-09 10:32:04.290823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.890 [2024-12-09 10:32:04.290834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:31.890 [2024-12-09 10:32:04.291453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.n2jUW9F7x5 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.n2jUW9F7x5 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.n2jUW9F7x5 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n2jUW9F7x5 00:21:32.148 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.406 [2024-12-09 10:32:04.744708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.406 10:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:32.664 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.227 [2024-12-09 10:32:05.366328] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.227 [2024-12-09 10:32:05.366614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.227 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.484 malloc0 00:21:33.484 10:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.742 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:33.999 [2024-12-09 10:32:06.260522] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n2jUW9F7x5': 0100666 00:21:33.999 [2024-12-09 10:32:06.260564] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:33.999 request: 00:21:33.999 { 00:21:33.999 "name": "key0", 00:21:33.999 "path": "/tmp/tmp.n2jUW9F7x5", 00:21:34.000 "method": "keyring_file_add_key", 00:21:34.000 "req_id": 1 
00:21:34.000 } 00:21:34.000 Got JSON-RPC error response 00:21:34.000 response: 00:21:34.000 { 00:21:34.000 "code": -1, 00:21:34.000 "message": "Operation not permitted" 00:21:34.000 } 00:21:34.000 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:34.257 [2024-12-09 10:32:06.529280] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:34.257 [2024-12-09 10:32:06.529342] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:34.257 request: 00:21:34.257 { 00:21:34.257 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.257 "host": "nqn.2016-06.io.spdk:host1", 00:21:34.257 "psk": "key0", 00:21:34.257 "method": "nvmf_subsystem_add_host", 00:21:34.257 "req_id": 1 00:21:34.257 } 00:21:34.257 Got JSON-RPC error response 00:21:34.257 response: 00:21:34.257 { 00:21:34.257 "code": -32603, 00:21:34.257 "message": "Internal error" 00:21:34.257 } 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2566040 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2566040 ']' 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2566040 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:34.257 10:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566040 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566040' 00:21:34.257 killing process with pid 2566040 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2566040 00:21:34.257 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2566040 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.n2jUW9F7x5 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2566427 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2566427 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2566427 ']' 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.516 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.516 [2024-12-09 10:32:06.926149] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:34.516 [2024-12-09 10:32:06.926244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.838 [2024-12-09 10:32:06.996197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.838 [2024-12-09 10:32:07.046360] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.838 [2024-12-09 10:32:07.046422] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.838 [2024-12-09 10:32:07.046449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.838 [2024-12-09 10:32:07.046461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.838 [2024-12-09 10:32:07.046470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.838 [2024-12-09 10:32:07.047021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.n2jUW9F7x5 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n2jUW9F7x5 00:21:34.839 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.096 [2024-12-09 10:32:07.426450] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.096 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.354 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.612 [2024-12-09 10:32:07.967920] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.612 [2024-12-09 10:32:07.968198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:35.612 10:32:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:35.869 malloc0 00:21:35.869 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:36.434 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:36.691 10:32:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2566716 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2566716 /var/tmp/bdevperf.sock 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2566716 ']' 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:21:36.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.949 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.949 [2024-12-09 10:32:09.227039] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:36.949 [2024-12-09 10:32:09.227113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566716 ] 00:21:36.949 [2024-12-09 10:32:09.292937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.949 [2024-12-09 10:32:09.352704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.207 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.207 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:37.207 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:37.465 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:37.724 [2024-12-09 10:32:09.989511] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:37.724 TLSTESTn1 00:21:37.724 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:38.290 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:38.290 "subsystems": [ 00:21:38.290 { 00:21:38.290 "subsystem": "keyring", 00:21:38.290 "config": [ 00:21:38.290 { 00:21:38.290 "method": "keyring_file_add_key", 00:21:38.290 "params": { 00:21:38.290 "name": "key0", 00:21:38.290 "path": "/tmp/tmp.n2jUW9F7x5" 00:21:38.290 } 00:21:38.290 } 00:21:38.290 ] 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "subsystem": "iobuf", 00:21:38.290 "config": [ 00:21:38.290 { 00:21:38.290 "method": "iobuf_set_options", 00:21:38.290 "params": { 00:21:38.290 "small_pool_count": 8192, 00:21:38.290 "large_pool_count": 1024, 00:21:38.290 "small_bufsize": 8192, 00:21:38.290 "large_bufsize": 135168, 00:21:38.290 "enable_numa": false 00:21:38.290 } 00:21:38.290 } 00:21:38.290 ] 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "subsystem": "sock", 00:21:38.290 "config": [ 00:21:38.290 { 00:21:38.290 "method": "sock_set_default_impl", 00:21:38.290 "params": { 00:21:38.290 "impl_name": "posix" 00:21:38.290 } 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "method": "sock_impl_set_options", 00:21:38.290 "params": { 00:21:38.290 "impl_name": "ssl", 00:21:38.290 "recv_buf_size": 4096, 00:21:38.290 "send_buf_size": 4096, 00:21:38.290 "enable_recv_pipe": true, 00:21:38.290 "enable_quickack": false, 00:21:38.290 "enable_placement_id": 0, 00:21:38.290 "enable_zerocopy_send_server": true, 00:21:38.290 "enable_zerocopy_send_client": false, 00:21:38.290 "zerocopy_threshold": 0, 00:21:38.290 "tls_version": 0, 00:21:38.290 "enable_ktls": false 00:21:38.290 } 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "method": "sock_impl_set_options", 00:21:38.290 "params": { 00:21:38.290 "impl_name": "posix", 00:21:38.290 "recv_buf_size": 2097152, 00:21:38.290 "send_buf_size": 2097152, 00:21:38.290 "enable_recv_pipe": true, 00:21:38.290 "enable_quickack": false, 00:21:38.290 "enable_placement_id": 0, 
00:21:38.290 "enable_zerocopy_send_server": true, 00:21:38.290 "enable_zerocopy_send_client": false, 00:21:38.290 "zerocopy_threshold": 0, 00:21:38.290 "tls_version": 0, 00:21:38.290 "enable_ktls": false 00:21:38.290 } 00:21:38.290 } 00:21:38.290 ] 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "subsystem": "vmd", 00:21:38.290 "config": [] 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "subsystem": "accel", 00:21:38.290 "config": [ 00:21:38.290 { 00:21:38.290 "method": "accel_set_options", 00:21:38.290 "params": { 00:21:38.290 "small_cache_size": 128, 00:21:38.290 "large_cache_size": 16, 00:21:38.290 "task_count": 2048, 00:21:38.290 "sequence_count": 2048, 00:21:38.290 "buf_count": 2048 00:21:38.290 } 00:21:38.290 } 00:21:38.290 ] 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "subsystem": "bdev", 00:21:38.290 "config": [ 00:21:38.290 { 00:21:38.290 "method": "bdev_set_options", 00:21:38.290 "params": { 00:21:38.290 "bdev_io_pool_size": 65535, 00:21:38.290 "bdev_io_cache_size": 256, 00:21:38.290 "bdev_auto_examine": true, 00:21:38.290 "iobuf_small_cache_size": 128, 00:21:38.290 "iobuf_large_cache_size": 16 00:21:38.290 } 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "method": "bdev_raid_set_options", 00:21:38.290 "params": { 00:21:38.290 "process_window_size_kb": 1024, 00:21:38.290 "process_max_bandwidth_mb_sec": 0 00:21:38.290 } 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "method": "bdev_iscsi_set_options", 00:21:38.290 "params": { 00:21:38.290 "timeout_sec": 30 00:21:38.290 } 00:21:38.290 }, 00:21:38.290 { 00:21:38.290 "method": "bdev_nvme_set_options", 00:21:38.290 "params": { 00:21:38.290 "action_on_timeout": "none", 00:21:38.290 "timeout_us": 0, 00:21:38.290 "timeout_admin_us": 0, 00:21:38.290 "keep_alive_timeout_ms": 10000, 00:21:38.290 "arbitration_burst": 0, 00:21:38.290 "low_priority_weight": 0, 00:21:38.290 "medium_priority_weight": 0, 00:21:38.290 "high_priority_weight": 0, 00:21:38.290 "nvme_adminq_poll_period_us": 10000, 00:21:38.290 "nvme_ioq_poll_period_us": 0, 
00:21:38.290 "io_queue_requests": 0, 00:21:38.290 "delay_cmd_submit": true, 00:21:38.290 "transport_retry_count": 4, 00:21:38.290 "bdev_retry_count": 3, 00:21:38.290 "transport_ack_timeout": 0, 00:21:38.291 "ctrlr_loss_timeout_sec": 0, 00:21:38.291 "reconnect_delay_sec": 0, 00:21:38.291 "fast_io_fail_timeout_sec": 0, 00:21:38.291 "disable_auto_failback": false, 00:21:38.291 "generate_uuids": false, 00:21:38.291 "transport_tos": 0, 00:21:38.291 "nvme_error_stat": false, 00:21:38.291 "rdma_srq_size": 0, 00:21:38.291 "io_path_stat": false, 00:21:38.291 "allow_accel_sequence": false, 00:21:38.291 "rdma_max_cq_size": 0, 00:21:38.291 "rdma_cm_event_timeout_ms": 0, 00:21:38.291 "dhchap_digests": [ 00:21:38.291 "sha256", 00:21:38.291 "sha384", 00:21:38.291 "sha512" 00:21:38.291 ], 00:21:38.291 "dhchap_dhgroups": [ 00:21:38.291 "null", 00:21:38.291 "ffdhe2048", 00:21:38.291 "ffdhe3072", 00:21:38.291 "ffdhe4096", 00:21:38.291 "ffdhe6144", 00:21:38.291 "ffdhe8192" 00:21:38.291 ] 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "bdev_nvme_set_hotplug", 00:21:38.291 "params": { 00:21:38.291 "period_us": 100000, 00:21:38.291 "enable": false 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "bdev_malloc_create", 00:21:38.291 "params": { 00:21:38.291 "name": "malloc0", 00:21:38.291 "num_blocks": 8192, 00:21:38.291 "block_size": 4096, 00:21:38.291 "physical_block_size": 4096, 00:21:38.291 "uuid": "1ec53d0a-ce6f-4b2f-afef-760189753e0b", 00:21:38.291 "optimal_io_boundary": 0, 00:21:38.291 "md_size": 0, 00:21:38.291 "dif_type": 0, 00:21:38.291 "dif_is_head_of_md": false, 00:21:38.291 "dif_pi_format": 0 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "bdev_wait_for_examine" 00:21:38.291 } 00:21:38.291 ] 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "subsystem": "nbd", 00:21:38.291 "config": [] 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "subsystem": "scheduler", 00:21:38.291 "config": [ 00:21:38.291 { 00:21:38.291 "method": 
"framework_set_scheduler", 00:21:38.291 "params": { 00:21:38.291 "name": "static" 00:21:38.291 } 00:21:38.291 } 00:21:38.291 ] 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "subsystem": "nvmf", 00:21:38.291 "config": [ 00:21:38.291 { 00:21:38.291 "method": "nvmf_set_config", 00:21:38.291 "params": { 00:21:38.291 "discovery_filter": "match_any", 00:21:38.291 "admin_cmd_passthru": { 00:21:38.291 "identify_ctrlr": false 00:21:38.291 }, 00:21:38.291 "dhchap_digests": [ 00:21:38.291 "sha256", 00:21:38.291 "sha384", 00:21:38.291 "sha512" 00:21:38.291 ], 00:21:38.291 "dhchap_dhgroups": [ 00:21:38.291 "null", 00:21:38.291 "ffdhe2048", 00:21:38.291 "ffdhe3072", 00:21:38.291 "ffdhe4096", 00:21:38.291 "ffdhe6144", 00:21:38.291 "ffdhe8192" 00:21:38.291 ] 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_set_max_subsystems", 00:21:38.291 "params": { 00:21:38.291 "max_subsystems": 1024 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_set_crdt", 00:21:38.291 "params": { 00:21:38.291 "crdt1": 0, 00:21:38.291 "crdt2": 0, 00:21:38.291 "crdt3": 0 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_create_transport", 00:21:38.291 "params": { 00:21:38.291 "trtype": "TCP", 00:21:38.291 "max_queue_depth": 128, 00:21:38.291 "max_io_qpairs_per_ctrlr": 127, 00:21:38.291 "in_capsule_data_size": 4096, 00:21:38.291 "max_io_size": 131072, 00:21:38.291 "io_unit_size": 131072, 00:21:38.291 "max_aq_depth": 128, 00:21:38.291 "num_shared_buffers": 511, 00:21:38.291 "buf_cache_size": 4294967295, 00:21:38.291 "dif_insert_or_strip": false, 00:21:38.291 "zcopy": false, 00:21:38.291 "c2h_success": false, 00:21:38.291 "sock_priority": 0, 00:21:38.291 "abort_timeout_sec": 1, 00:21:38.291 "ack_timeout": 0, 00:21:38.291 "data_wr_pool_size": 0 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_create_subsystem", 00:21:38.291 "params": { 00:21:38.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.291 
"allow_any_host": false, 00:21:38.291 "serial_number": "SPDK00000000000001", 00:21:38.291 "model_number": "SPDK bdev Controller", 00:21:38.291 "max_namespaces": 10, 00:21:38.291 "min_cntlid": 1, 00:21:38.291 "max_cntlid": 65519, 00:21:38.291 "ana_reporting": false 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_subsystem_add_host", 00:21:38.291 "params": { 00:21:38.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.291 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.291 "psk": "key0" 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_subsystem_add_ns", 00:21:38.291 "params": { 00:21:38.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.291 "namespace": { 00:21:38.291 "nsid": 1, 00:21:38.291 "bdev_name": "malloc0", 00:21:38.291 "nguid": "1EC53D0ACE6F4B2FAFEF760189753E0B", 00:21:38.291 "uuid": "1ec53d0a-ce6f-4b2f-afef-760189753e0b", 00:21:38.291 "no_auto_visible": false 00:21:38.291 } 00:21:38.291 } 00:21:38.291 }, 00:21:38.291 { 00:21:38.291 "method": "nvmf_subsystem_add_listener", 00:21:38.291 "params": { 00:21:38.291 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.291 "listen_address": { 00:21:38.291 "trtype": "TCP", 00:21:38.291 "adrfam": "IPv4", 00:21:38.291 "traddr": "10.0.0.2", 00:21:38.291 "trsvcid": "4420" 00:21:38.291 }, 00:21:38.291 "secure_channel": true 00:21:38.291 } 00:21:38.291 } 00:21:38.291 ] 00:21:38.291 } 00:21:38.291 ] 00:21:38.291 }' 00:21:38.291 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:38.550 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:38.550 "subsystems": [ 00:21:38.550 { 00:21:38.550 "subsystem": "keyring", 00:21:38.550 "config": [ 00:21:38.550 { 00:21:38.550 "method": "keyring_file_add_key", 00:21:38.550 "params": { 00:21:38.550 "name": "key0", 00:21:38.550 "path": "/tmp/tmp.n2jUW9F7x5" 00:21:38.550 } 
00:21:38.550 } 00:21:38.550 ] 00:21:38.550 }, 00:21:38.550 { 00:21:38.550 "subsystem": "iobuf", 00:21:38.550 "config": [ 00:21:38.550 { 00:21:38.550 "method": "iobuf_set_options", 00:21:38.550 "params": { 00:21:38.550 "small_pool_count": 8192, 00:21:38.550 "large_pool_count": 1024, 00:21:38.550 "small_bufsize": 8192, 00:21:38.550 "large_bufsize": 135168, 00:21:38.550 "enable_numa": false 00:21:38.550 } 00:21:38.550 } 00:21:38.550 ] 00:21:38.550 }, 00:21:38.550 { 00:21:38.550 "subsystem": "sock", 00:21:38.550 "config": [ 00:21:38.550 { 00:21:38.550 "method": "sock_set_default_impl", 00:21:38.550 "params": { 00:21:38.550 "impl_name": "posix" 00:21:38.550 } 00:21:38.550 }, 00:21:38.550 { 00:21:38.550 "method": "sock_impl_set_options", 00:21:38.550 "params": { 00:21:38.550 "impl_name": "ssl", 00:21:38.551 "recv_buf_size": 4096, 00:21:38.551 "send_buf_size": 4096, 00:21:38.551 "enable_recv_pipe": true, 00:21:38.551 "enable_quickack": false, 00:21:38.551 "enable_placement_id": 0, 00:21:38.551 "enable_zerocopy_send_server": true, 00:21:38.551 "enable_zerocopy_send_client": false, 00:21:38.551 "zerocopy_threshold": 0, 00:21:38.551 "tls_version": 0, 00:21:38.551 "enable_ktls": false 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "sock_impl_set_options", 00:21:38.551 "params": { 00:21:38.551 "impl_name": "posix", 00:21:38.551 "recv_buf_size": 2097152, 00:21:38.551 "send_buf_size": 2097152, 00:21:38.551 "enable_recv_pipe": true, 00:21:38.551 "enable_quickack": false, 00:21:38.551 "enable_placement_id": 0, 00:21:38.551 "enable_zerocopy_send_server": true, 00:21:38.551 "enable_zerocopy_send_client": false, 00:21:38.551 "zerocopy_threshold": 0, 00:21:38.551 "tls_version": 0, 00:21:38.551 "enable_ktls": false 00:21:38.551 } 00:21:38.551 } 00:21:38.551 ] 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "subsystem": "vmd", 00:21:38.551 "config": [] 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "subsystem": "accel", 00:21:38.551 "config": [ 00:21:38.551 { 00:21:38.551 
"method": "accel_set_options", 00:21:38.551 "params": { 00:21:38.551 "small_cache_size": 128, 00:21:38.551 "large_cache_size": 16, 00:21:38.551 "task_count": 2048, 00:21:38.551 "sequence_count": 2048, 00:21:38.551 "buf_count": 2048 00:21:38.551 } 00:21:38.551 } 00:21:38.551 ] 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "subsystem": "bdev", 00:21:38.551 "config": [ 00:21:38.551 { 00:21:38.551 "method": "bdev_set_options", 00:21:38.551 "params": { 00:21:38.551 "bdev_io_pool_size": 65535, 00:21:38.551 "bdev_io_cache_size": 256, 00:21:38.551 "bdev_auto_examine": true, 00:21:38.551 "iobuf_small_cache_size": 128, 00:21:38.551 "iobuf_large_cache_size": 16 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "bdev_raid_set_options", 00:21:38.551 "params": { 00:21:38.551 "process_window_size_kb": 1024, 00:21:38.551 "process_max_bandwidth_mb_sec": 0 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "bdev_iscsi_set_options", 00:21:38.551 "params": { 00:21:38.551 "timeout_sec": 30 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "bdev_nvme_set_options", 00:21:38.551 "params": { 00:21:38.551 "action_on_timeout": "none", 00:21:38.551 "timeout_us": 0, 00:21:38.551 "timeout_admin_us": 0, 00:21:38.551 "keep_alive_timeout_ms": 10000, 00:21:38.551 "arbitration_burst": 0, 00:21:38.551 "low_priority_weight": 0, 00:21:38.551 "medium_priority_weight": 0, 00:21:38.551 "high_priority_weight": 0, 00:21:38.551 "nvme_adminq_poll_period_us": 10000, 00:21:38.551 "nvme_ioq_poll_period_us": 0, 00:21:38.551 "io_queue_requests": 512, 00:21:38.551 "delay_cmd_submit": true, 00:21:38.551 "transport_retry_count": 4, 00:21:38.551 "bdev_retry_count": 3, 00:21:38.551 "transport_ack_timeout": 0, 00:21:38.551 "ctrlr_loss_timeout_sec": 0, 00:21:38.551 "reconnect_delay_sec": 0, 00:21:38.551 "fast_io_fail_timeout_sec": 0, 00:21:38.551 "disable_auto_failback": false, 00:21:38.551 "generate_uuids": false, 00:21:38.551 "transport_tos": 0, 00:21:38.551 
"nvme_error_stat": false, 00:21:38.551 "rdma_srq_size": 0, 00:21:38.551 "io_path_stat": false, 00:21:38.551 "allow_accel_sequence": false, 00:21:38.551 "rdma_max_cq_size": 0, 00:21:38.551 "rdma_cm_event_timeout_ms": 0, 00:21:38.551 "dhchap_digests": [ 00:21:38.551 "sha256", 00:21:38.551 "sha384", 00:21:38.551 "sha512" 00:21:38.551 ], 00:21:38.551 "dhchap_dhgroups": [ 00:21:38.551 "null", 00:21:38.551 "ffdhe2048", 00:21:38.551 "ffdhe3072", 00:21:38.551 "ffdhe4096", 00:21:38.551 "ffdhe6144", 00:21:38.551 "ffdhe8192" 00:21:38.551 ] 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "bdev_nvme_attach_controller", 00:21:38.551 "params": { 00:21:38.551 "name": "TLSTEST", 00:21:38.551 "trtype": "TCP", 00:21:38.551 "adrfam": "IPv4", 00:21:38.551 "traddr": "10.0.0.2", 00:21:38.551 "trsvcid": "4420", 00:21:38.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.551 "prchk_reftag": false, 00:21:38.551 "prchk_guard": false, 00:21:38.551 "ctrlr_loss_timeout_sec": 0, 00:21:38.551 "reconnect_delay_sec": 0, 00:21:38.551 "fast_io_fail_timeout_sec": 0, 00:21:38.551 "psk": "key0", 00:21:38.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.551 "hdgst": false, 00:21:38.551 "ddgst": false, 00:21:38.551 "multipath": "multipath" 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "bdev_nvme_set_hotplug", 00:21:38.551 "params": { 00:21:38.551 "period_us": 100000, 00:21:38.551 "enable": false 00:21:38.551 } 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "method": "bdev_wait_for_examine" 00:21:38.551 } 00:21:38.551 ] 00:21:38.551 }, 00:21:38.551 { 00:21:38.551 "subsystem": "nbd", 00:21:38.551 "config": [] 00:21:38.551 } 00:21:38.551 ] 00:21:38.551 }' 00:21:38.551 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2566716 00:21:38.551 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2566716 ']' 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2566716 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566716 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566716' 00:21:38.552 killing process with pid 2566716 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2566716 00:21:38.552 Received shutdown signal, test time was about 10.000000 seconds 00:21:38.552 00:21:38.552 Latency(us) 00:21:38.552 [2024-12-09T09:32:10.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:38.552 [2024-12-09T09:32:10.993Z] =================================================================================================================== 00:21:38.552 [2024-12-09T09:32:10.993Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:38.552 10:32:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2566716 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2566427 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2566427 ']' 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2566427 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566427 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566427' 00:21:38.809 killing process with pid 2566427 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2566427 00:21:38.809 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2566427 00:21:39.068 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:39.068 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:39.068 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:39.068 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:39.068 "subsystems": [ 00:21:39.068 { 00:21:39.068 "subsystem": "keyring", 00:21:39.068 "config": [ 00:21:39.068 { 00:21:39.068 "method": "keyring_file_add_key", 00:21:39.068 "params": { 00:21:39.068 "name": "key0", 00:21:39.068 "path": "/tmp/tmp.n2jUW9F7x5" 00:21:39.068 } 00:21:39.068 } 00:21:39.068 ] 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "subsystem": "iobuf", 00:21:39.068 "config": [ 00:21:39.068 { 00:21:39.068 "method": "iobuf_set_options", 00:21:39.068 "params": { 00:21:39.068 "small_pool_count": 8192, 00:21:39.068 "large_pool_count": 1024, 00:21:39.068 "small_bufsize": 8192, 00:21:39.068 "large_bufsize": 135168, 00:21:39.068 "enable_numa": false 00:21:39.068 } 00:21:39.068 } 00:21:39.068 ] 00:21:39.068 }, 
00:21:39.068 { 00:21:39.068 "subsystem": "sock", 00:21:39.068 "config": [ 00:21:39.068 { 00:21:39.068 "method": "sock_set_default_impl", 00:21:39.068 "params": { 00:21:39.068 "impl_name": "posix" 00:21:39.068 } 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "method": "sock_impl_set_options", 00:21:39.068 "params": { 00:21:39.068 "impl_name": "ssl", 00:21:39.068 "recv_buf_size": 4096, 00:21:39.068 "send_buf_size": 4096, 00:21:39.068 "enable_recv_pipe": true, 00:21:39.068 "enable_quickack": false, 00:21:39.068 "enable_placement_id": 0, 00:21:39.068 "enable_zerocopy_send_server": true, 00:21:39.068 "enable_zerocopy_send_client": false, 00:21:39.068 "zerocopy_threshold": 0, 00:21:39.068 "tls_version": 0, 00:21:39.068 "enable_ktls": false 00:21:39.068 } 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "method": "sock_impl_set_options", 00:21:39.068 "params": { 00:21:39.068 "impl_name": "posix", 00:21:39.068 "recv_buf_size": 2097152, 00:21:39.068 "send_buf_size": 2097152, 00:21:39.068 "enable_recv_pipe": true, 00:21:39.068 "enable_quickack": false, 00:21:39.068 "enable_placement_id": 0, 00:21:39.068 "enable_zerocopy_send_server": true, 00:21:39.068 "enable_zerocopy_send_client": false, 00:21:39.068 "zerocopy_threshold": 0, 00:21:39.068 "tls_version": 0, 00:21:39.068 "enable_ktls": false 00:21:39.068 } 00:21:39.068 } 00:21:39.068 ] 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "subsystem": "vmd", 00:21:39.068 "config": [] 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "subsystem": "accel", 00:21:39.068 "config": [ 00:21:39.068 { 00:21:39.068 "method": "accel_set_options", 00:21:39.068 "params": { 00:21:39.068 "small_cache_size": 128, 00:21:39.068 "large_cache_size": 16, 00:21:39.068 "task_count": 2048, 00:21:39.068 "sequence_count": 2048, 00:21:39.068 "buf_count": 2048 00:21:39.068 } 00:21:39.068 } 00:21:39.068 ] 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "subsystem": "bdev", 00:21:39.068 "config": [ 00:21:39.068 { 00:21:39.068 "method": "bdev_set_options", 00:21:39.068 "params": { 
00:21:39.068 "bdev_io_pool_size": 65535, 00:21:39.068 "bdev_io_cache_size": 256, 00:21:39.068 "bdev_auto_examine": true, 00:21:39.068 "iobuf_small_cache_size": 128, 00:21:39.068 "iobuf_large_cache_size": 16 00:21:39.068 } 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "method": "bdev_raid_set_options", 00:21:39.068 "params": { 00:21:39.068 "process_window_size_kb": 1024, 00:21:39.068 "process_max_bandwidth_mb_sec": 0 00:21:39.068 } 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "method": "bdev_iscsi_set_options", 00:21:39.068 "params": { 00:21:39.068 "timeout_sec": 30 00:21:39.068 } 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "method": "bdev_nvme_set_options", 00:21:39.068 "params": { 00:21:39.068 "action_on_timeout": "none", 00:21:39.068 "timeout_us": 0, 00:21:39.068 "timeout_admin_us": 0, 00:21:39.068 "keep_alive_timeout_ms": 10000, 00:21:39.068 "arbitration_burst": 0, 00:21:39.068 "low_priority_weight": 0, 00:21:39.068 "medium_priority_weight": 0, 00:21:39.068 "high_priority_weight": 0, 00:21:39.068 "nvme_adminq_poll_period_us": 10000, 00:21:39.068 "nvme_ioq_poll_period_us": 0, 00:21:39.068 "io_queue_requests": 0, 00:21:39.068 "delay_cmd_submit": true, 00:21:39.068 "transport_retry_count": 4, 00:21:39.068 "bdev_retry_count": 3, 00:21:39.068 "transport_ack_timeout": 0, 00:21:39.068 "ctrlr_loss_timeout_sec": 0, 00:21:39.068 "reconnect_delay_sec": 0, 00:21:39.068 "fast_io_fail_timeout_sec": 0, 00:21:39.068 "disable_auto_failback": false, 00:21:39.068 "generate_uuids": false, 00:21:39.068 "transport_tos": 0, 00:21:39.068 "nvme_error_stat": false, 00:21:39.068 "rdma_srq_size": 0, 00:21:39.068 "io_path_stat": false, 00:21:39.068 "allow_accel_sequence": false, 00:21:39.068 "rdma_max_cq_size": 0, 00:21:39.068 "rdma_cm_event_timeout_ms": 0, 00:21:39.068 "dhchap_digests": [ 00:21:39.068 "sha256", 00:21:39.068 "sha384", 00:21:39.068 "sha512" 00:21:39.068 ], 00:21:39.068 "dhchap_dhgroups": [ 00:21:39.068 "null", 00:21:39.068 "ffdhe2048", 00:21:39.068 "ffdhe3072", 00:21:39.068 
"ffdhe4096", 00:21:39.068 "ffdhe6144", 00:21:39.068 "ffdhe8192" 00:21:39.068 ] 00:21:39.068 } 00:21:39.068 }, 00:21:39.068 { 00:21:39.068 "method": "bdev_nvme_set_hotplug", 00:21:39.068 "params": { 00:21:39.069 "period_us": 100000, 00:21:39.069 "enable": false 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "bdev_malloc_create", 00:21:39.069 "params": { 00:21:39.069 "name": "malloc0", 00:21:39.069 "num_blocks": 8192, 00:21:39.069 "block_size": 4096, 00:21:39.069 "physical_block_size": 4096, 00:21:39.069 "uuid": "1ec53d0a-ce6f-4b2f-afef-760189753e0b", 00:21:39.069 "optimal_io_boundary": 0, 00:21:39.069 "md_size": 0, 00:21:39.069 "dif_type": 0, 00:21:39.069 "dif_is_head_of_md": false, 00:21:39.069 "dif_pi_format": 0 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "bdev_wait_for_examine" 00:21:39.069 } 00:21:39.069 ] 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "subsystem": "nbd", 00:21:39.069 "config": [] 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "subsystem": "scheduler", 00:21:39.069 "config": [ 00:21:39.069 { 00:21:39.069 "method": "framework_set_scheduler", 00:21:39.069 "params": { 00:21:39.069 "name": "static" 00:21:39.069 } 00:21:39.069 } 00:21:39.069 ] 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "subsystem": "nvmf", 00:21:39.069 "config": [ 00:21:39.069 { 00:21:39.069 "method": "nvmf_set_config", 00:21:39.069 "params": { 00:21:39.069 "discovery_filter": "match_any", 00:21:39.069 "admin_cmd_passthru": { 00:21:39.069 "identify_ctrlr": false 00:21:39.069 }, 00:21:39.069 "dhchap_digests": [ 00:21:39.069 "sha256", 00:21:39.069 "sha384", 00:21:39.069 "sha512" 00:21:39.069 ], 00:21:39.069 "dhchap_dhgroups": [ 00:21:39.069 "null", 00:21:39.069 "ffdhe2048", 00:21:39.069 "ffdhe3072", 00:21:39.069 "ffdhe4096", 00:21:39.069 "ffdhe6144", 00:21:39.069 "ffdhe8192" 00:21:39.069 ] 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_set_max_subsystems", 00:21:39.069 "params": { 00:21:39.069 "max_subsystems": 1024 
00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_set_crdt", 00:21:39.069 "params": { 00:21:39.069 "crdt1": 0, 00:21:39.069 "crdt2": 0, 00:21:39.069 "crdt3": 0 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_create_transport", 00:21:39.069 "params": { 00:21:39.069 "trtype": "TCP", 00:21:39.069 "max_queue_depth": 128, 00:21:39.069 "max_io_qpairs_per_ctrlr": 127, 00:21:39.069 "in_capsule_data_size": 4096, 00:21:39.069 "max_io_size": 131072, 00:21:39.069 "io_unit_size": 131072, 00:21:39.069 "max_aq_depth": 128, 00:21:39.069 "num_shared_buffers": 511, 00:21:39.069 "buf_cache_size": 4294967295, 00:21:39.069 "dif_insert_or_strip": false, 00:21:39.069 "zcopy": false, 00:21:39.069 "c2h_success": false, 00:21:39.069 "sock_priority": 0, 00:21:39.069 "abort_timeout_sec": 1, 00:21:39.069 "ack_timeout": 0, 00:21:39.069 "data_wr_pool_size": 0 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_create_subsystem", 00:21:39.069 "params": { 00:21:39.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.069 "allow_any_host": false, 00:21:39.069 "serial_number": "SPDK00000000000001", 00:21:39.069 "model_number": "SPDK bdev Controller", 00:21:39.069 "max_namespaces": 10, 00:21:39.069 "min_cntlid": 1, 00:21:39.069 "max_cntlid": 65519, 00:21:39.069 "ana_reporting": false 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_subsystem_add_host", 00:21:39.069 "params": { 00:21:39.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.069 "host": "nqn.2016-06.io.spdk:host1", 00:21:39.069 "psk": "key0" 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_subsystem_add_ns", 00:21:39.069 "params": { 00:21:39.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.069 "namespace": { 00:21:39.069 "nsid": 1, 00:21:39.069 "bdev_name": "malloc0", 00:21:39.069 "nguid": "1EC53D0ACE6F4B2FAFEF760189753E0B", 00:21:39.069 "uuid": "1ec53d0a-ce6f-4b2f-afef-760189753e0b", 00:21:39.069 "no_auto_visible": 
false 00:21:39.069 } 00:21:39.069 } 00:21:39.069 }, 00:21:39.069 { 00:21:39.069 "method": "nvmf_subsystem_add_listener", 00:21:39.069 "params": { 00:21:39.069 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.069 "listen_address": { 00:21:39.069 "trtype": "TCP", 00:21:39.069 "adrfam": "IPv4", 00:21:39.069 "traddr": "10.0.0.2", 00:21:39.069 "trsvcid": "4420" 00:21:39.069 }, 00:21:39.069 "secure_channel": true 00:21:39.069 } 00:21:39.069 } 00:21:39.069 ] 00:21:39.069 } 00:21:39.069 ] 00:21:39.069 }' 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2566996 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2566996 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2566996 ']' 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:39.069 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.069 [2024-12-09 10:32:11.481703] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:39.069 [2024-12-09 10:32:11.481774] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.328 [2024-12-09 10:32:11.551670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.328 [2024-12-09 10:32:11.609780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.328 [2024-12-09 10:32:11.609836] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.328 [2024-12-09 10:32:11.609865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:39.328 [2024-12-09 10:32:11.609878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:39.328 [2024-12-09 10:32:11.609887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:39.328 [2024-12-09 10:32:11.610573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.585 [2024-12-09 10:32:11.856614] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.585 [2024-12-09 10:32:11.888631] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:39.585 [2024-12-09 10:32:11.888914] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2567147 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2567147 /var/tmp/bdevperf.sock 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2567147 ']' 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:40.150 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:40.150 "subsystems": [ 00:21:40.150 { 00:21:40.150 "subsystem": "keyring", 00:21:40.150 "config": [ 00:21:40.150 { 00:21:40.150 "method": "keyring_file_add_key", 00:21:40.150 "params": { 00:21:40.150 "name": "key0", 00:21:40.150 "path": "/tmp/tmp.n2jUW9F7x5" 00:21:40.150 } 00:21:40.150 } 00:21:40.150 ] 00:21:40.150 }, 00:21:40.150 { 00:21:40.150 "subsystem": "iobuf", 00:21:40.150 "config": [ 00:21:40.150 { 00:21:40.150 "method": "iobuf_set_options", 00:21:40.150 "params": { 00:21:40.150 "small_pool_count": 8192, 00:21:40.150 "large_pool_count": 1024, 00:21:40.150 "small_bufsize": 8192, 00:21:40.150 "large_bufsize": 135168, 00:21:40.150 "enable_numa": false 00:21:40.150 } 00:21:40.150 } 00:21:40.150 ] 00:21:40.150 }, 00:21:40.150 { 00:21:40.150 "subsystem": "sock", 00:21:40.150 "config": [ 00:21:40.150 { 00:21:40.150 "method": "sock_set_default_impl", 00:21:40.150 "params": { 00:21:40.150 "impl_name": "posix" 00:21:40.150 } 00:21:40.150 }, 00:21:40.150 { 00:21:40.150 "method": "sock_impl_set_options", 00:21:40.150 "params": { 00:21:40.150 "impl_name": "ssl", 00:21:40.150 "recv_buf_size": 4096, 00:21:40.150 "send_buf_size": 4096, 00:21:40.150 "enable_recv_pipe": true, 00:21:40.150 "enable_quickack": false, 00:21:40.150 "enable_placement_id": 0, 00:21:40.150 "enable_zerocopy_send_server": true, 00:21:40.150 "enable_zerocopy_send_client": false, 00:21:40.150 "zerocopy_threshold": 0, 00:21:40.150 "tls_version": 0, 00:21:40.150 "enable_ktls": false 00:21:40.150 } 00:21:40.150 }, 00:21:40.150 { 00:21:40.150 "method": "sock_impl_set_options", 00:21:40.150 "params": { 00:21:40.151 "impl_name": "posix", 00:21:40.151 "recv_buf_size": 2097152, 00:21:40.151 "send_buf_size": 2097152, 00:21:40.151 "enable_recv_pipe": true, 00:21:40.151 "enable_quickack": false, 00:21:40.151 "enable_placement_id": 0, 00:21:40.151 "enable_zerocopy_send_server": true, 00:21:40.151 
"enable_zerocopy_send_client": false, 00:21:40.151 "zerocopy_threshold": 0, 00:21:40.151 "tls_version": 0, 00:21:40.151 "enable_ktls": false 00:21:40.151 } 00:21:40.151 } 00:21:40.151 ] 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "subsystem": "vmd", 00:21:40.151 "config": [] 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "subsystem": "accel", 00:21:40.151 "config": [ 00:21:40.151 { 00:21:40.151 "method": "accel_set_options", 00:21:40.151 "params": { 00:21:40.151 "small_cache_size": 128, 00:21:40.151 "large_cache_size": 16, 00:21:40.151 "task_count": 2048, 00:21:40.151 "sequence_count": 2048, 00:21:40.151 "buf_count": 2048 00:21:40.151 } 00:21:40.151 } 00:21:40.151 ] 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "subsystem": "bdev", 00:21:40.151 "config": [ 00:21:40.151 { 00:21:40.151 "method": "bdev_set_options", 00:21:40.151 "params": { 00:21:40.151 "bdev_io_pool_size": 65535, 00:21:40.151 "bdev_io_cache_size": 256, 00:21:40.151 "bdev_auto_examine": true, 00:21:40.151 "iobuf_small_cache_size": 128, 00:21:40.151 "iobuf_large_cache_size": 16 00:21:40.151 } 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "method": "bdev_raid_set_options", 00:21:40.151 "params": { 00:21:40.151 "process_window_size_kb": 1024, 00:21:40.151 "process_max_bandwidth_mb_sec": 0 00:21:40.151 } 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "method": "bdev_iscsi_set_options", 00:21:40.151 "params": { 00:21:40.151 "timeout_sec": 30 00:21:40.151 } 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "method": "bdev_nvme_set_options", 00:21:40.151 "params": { 00:21:40.151 "action_on_timeout": "none", 00:21:40.151 "timeout_us": 0, 00:21:40.151 "timeout_admin_us": 0, 00:21:40.151 "keep_alive_timeout_ms": 10000, 00:21:40.151 "arbitration_burst": 0, 00:21:40.151 "low_priority_weight": 0, 00:21:40.151 "medium_priority_weight": 0, 00:21:40.151 "high_priority_weight": 0, 00:21:40.151 "nvme_adminq_poll_period_us": 10000, 00:21:40.151 "nvme_ioq_poll_period_us": 0, 00:21:40.151 "io_queue_requests": 512, 00:21:40.151 
"delay_cmd_submit": true, 00:21:40.151 "transport_retry_count": 4, 00:21:40.151 "bdev_retry_count": 3, 00:21:40.151 "transport_ack_timeout": 0, 00:21:40.151 "ctrlr_loss_timeout_sec": 0, 00:21:40.151 "reconnect_delay_sec": 0, 00:21:40.151 "fast_io_fail_timeout_sec": 0, 00:21:40.151 "disable_auto_failback": false, 00:21:40.151 "generate_uuids": false, 00:21:40.151 "transport_tos": 0, 00:21:40.151 "nvme_error_stat": false, 00:21:40.151 "rdma_srq_size": 0, 00:21:40.151 "io_path_stat": false, 00:21:40.151 "allow_accel_sequence": false, 00:21:40.151 "rdma_max_cq_size": 0, 00:21:40.151 "rdma_cm_event_timeout_ms": 0, 00:21:40.151 "dhchap_digests": [ 00:21:40.151 "sha256", 00:21:40.151 "sha384", 00:21:40.151 "sha512" 00:21:40.151 ], 00:21:40.151 "dhchap_dhgroups": [ 00:21:40.151 "null", 00:21:40.151 "ffdhe2048", 00:21:40.151 "ffdhe3072", 00:21:40.151 "ffdhe4096", 00:21:40.151 "ffdhe6144", 00:21:40.151 "ffdhe8192" 00:21:40.151 ] 00:21:40.151 } 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "method": "bdev_nvme_attach_controller", 00:21:40.151 "params": { 00:21:40.151 "name": "TLSTEST", 00:21:40.151 "trtype": "TCP", 00:21:40.151 "adrfam": "IPv4", 00:21:40.151 "traddr": "10.0.0.2", 00:21:40.151 "trsvcid": "4420", 00:21:40.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.151 "prchk_reftag": false, 00:21:40.151 "prchk_guard": false, 00:21:40.151 "ctrlr_loss_timeout_sec": 0, 00:21:40.151 "reconnect_delay_sec": 0, 00:21:40.151 "fast_io_fail_timeout_sec": 0, 00:21:40.151 "psk": "key0", 00:21:40.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.151 "hdgst": false, 00:21:40.151 "ddgst": false, 00:21:40.151 "multipath": "multipath" 00:21:40.151 } 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "method": "bdev_nvme_set_hotplug", 00:21:40.151 "params": { 00:21:40.151 "period_us": 100000, 00:21:40.151 "enable": false 00:21:40.151 } 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 "method": "bdev_wait_for_examine" 00:21:40.151 } 00:21:40.151 ] 00:21:40.151 }, 00:21:40.151 { 00:21:40.151 
"subsystem": "nbd", 00:21:40.151 "config": [] 00:21:40.151 } 00:21:40.151 ] 00:21:40.151 }' 00:21:40.151 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:40.151 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.151 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.410 [2024-12-09 10:32:12.607429] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:40.410 [2024-12-09 10:32:12.607535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567147 ] 00:21:40.410 [2024-12-09 10:32:12.673347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.410 [2024-12-09 10:32:12.732357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:40.668 [2024-12-09 10:32:12.913995] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.668 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:40.668 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:40.668 10:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:40.926 Running I/O for 10 seconds... 
00:21:42.799 3277.00 IOPS, 12.80 MiB/s [2024-12-09T09:32:16.172Z] 3394.00 IOPS, 13.26 MiB/s [2024-12-09T09:32:17.554Z] 3428.33 IOPS, 13.39 MiB/s [2024-12-09T09:32:18.491Z] 3425.00 IOPS, 13.38 MiB/s [2024-12-09T09:32:19.425Z] 3421.20 IOPS, 13.36 MiB/s [2024-12-09T09:32:20.358Z] 3425.00 IOPS, 13.38 MiB/s [2024-12-09T09:32:21.292Z] 3443.14 IOPS, 13.45 MiB/s [2024-12-09T09:32:22.228Z] 3447.50 IOPS, 13.47 MiB/s [2024-12-09T09:32:23.600Z] 3443.22 IOPS, 13.45 MiB/s [2024-12-09T09:32:23.600Z] 3445.90 IOPS, 13.46 MiB/s 00:21:51.159 Latency(us) 00:21:51.159 [2024-12-09T09:32:23.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.159 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:51.159 Verification LBA range: start 0x0 length 0x2000 00:21:51.159 TLSTESTn1 : 10.02 3451.84 13.48 0.00 0.00 37018.62 8009.96 53982.25 00:21:51.159 [2024-12-09T09:32:23.600Z] =================================================================================================================== 00:21:51.159 [2024-12-09T09:32:23.600Z] Total : 3451.84 13.48 0.00 0.00 37018.62 8009.96 53982.25 00:21:51.159 { 00:21:51.159 "results": [ 00:21:51.159 { 00:21:51.159 "job": "TLSTESTn1", 00:21:51.159 "core_mask": "0x4", 00:21:51.159 "workload": "verify", 00:21:51.159 "status": "finished", 00:21:51.159 "verify_range": { 00:21:51.159 "start": 0, 00:21:51.159 "length": 8192 00:21:51.159 }, 00:21:51.159 "queue_depth": 128, 00:21:51.159 "io_size": 4096, 00:21:51.159 "runtime": 10.019595, 00:21:51.159 "iops": 3451.83612710893, 00:21:51.159 "mibps": 13.483734871519259, 00:21:51.159 "io_failed": 0, 00:21:51.159 "io_timeout": 0, 00:21:51.159 "avg_latency_us": 37018.62110286543, 00:21:51.159 "min_latency_us": 8009.955555555555, 00:21:51.159 "max_latency_us": 53982.24592592593 00:21:51.159 } 00:21:51.159 ], 00:21:51.159 "core_count": 1 00:21:51.159 } 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2567147 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2567147 ']' 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2567147 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567147 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567147' 00:21:51.159 killing process with pid 2567147 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2567147 00:21:51.159 Received shutdown signal, test time was about 10.000000 seconds 00:21:51.159 00:21:51.159 Latency(us) 00:21:51.159 [2024-12-09T09:32:23.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:51.159 [2024-12-09T09:32:23.600Z] =================================================================================================================== 00:21:51.159 [2024-12-09T09:32:23.600Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2567147 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2566996 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2566996 ']' 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2566996 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2566996 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2566996' 00:21:51.159 killing process with pid 2566996 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2566996 00:21:51.159 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2566996 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2568469 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2568469 00:21:51.416 
10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2568469 ']' 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.416 10:32:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.673 [2024-12-09 10:32:23.900173] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:51.673 [2024-12-09 10:32:23.900253] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.673 [2024-12-09 10:32:23.970771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.673 [2024-12-09 10:32:24.024538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.673 [2024-12-09 10:32:24.024595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.673 [2024-12-09 10:32:24.024623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.673 [2024-12-09 10:32:24.024634] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:51.673 [2024-12-09 10:32:24.024643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.673 [2024-12-09 10:32:24.025184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.n2jUW9F7x5 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.n2jUW9F7x5 00:21:51.931 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:52.189 [2024-12-09 10:32:24.409621] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.189 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:52.446 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:52.703 [2024-12-09 10:32:24.959150] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:52.703 [2024-12-09 10:32:24.959451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.703 10:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:52.960 malloc0 00:21:52.960 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:53.218 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:53.475 10:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:53.732 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2568767 00:21:53.732 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:53.732 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2568767 /var/tmp/bdevperf.sock 00:21:53.732 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:53.732 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2568767 ']' 00:21:53.732 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.733 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.733 
10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.733 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.733 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.991 [2024-12-09 10:32:26.177313] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:53.991 [2024-12-09 10:32:26.177386] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2568767 ] 00:21:53.991 [2024-12-09 10:32:26.253510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.991 [2024-12-09 10:32:26.324481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.248 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:54.248 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:54.248 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:54.506 10:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:54.763 [2024-12-09 10:32:27.015774] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:54.763 nvme0n1 00:21:54.763 10:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.022 Running I/O for 1 seconds... 00:21:55.964 3395.00 IOPS, 13.26 MiB/s 00:21:55.964 Latency(us) 00:21:55.964 [2024-12-09T09:32:28.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.964 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:55.964 Verification LBA range: start 0x0 length 0x2000 00:21:55.964 nvme0n1 : 1.02 3456.32 13.50 0.00 0.00 36701.64 7184.69 32816.55 00:21:55.964 [2024-12-09T09:32:28.405Z] =================================================================================================================== 00:21:55.964 [2024-12-09T09:32:28.405Z] Total : 3456.32 13.50 0.00 0.00 36701.64 7184.69 32816.55 00:21:55.964 { 00:21:55.964 "results": [ 00:21:55.964 { 00:21:55.964 "job": "nvme0n1", 00:21:55.964 "core_mask": "0x2", 00:21:55.964 "workload": "verify", 00:21:55.964 "status": "finished", 00:21:55.964 "verify_range": { 00:21:55.964 "start": 0, 00:21:55.964 "length": 8192 00:21:55.964 }, 00:21:55.964 "queue_depth": 128, 00:21:55.964 "io_size": 4096, 00:21:55.964 "runtime": 1.019292, 00:21:55.964 "iops": 3456.3206617926953, 00:21:55.964 "mibps": 13.501252585127716, 00:21:55.964 "io_failed": 0, 00:21:55.964 "io_timeout": 0, 00:21:55.964 "avg_latency_us": 36701.641296453985, 00:21:55.964 "min_latency_us": 7184.687407407408, 00:21:55.964 "max_latency_us": 32816.54518518518 00:21:55.964 } 00:21:55.964 ], 00:21:55.964 "core_count": 1 00:21:55.964 } 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2568767 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2568767 ']' 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2568767 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568767 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568767' 00:21:55.964 killing process with pid 2568767 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2568767 00:21:55.964 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.964 00:21:55.964 Latency(us) 00:21:55.964 [2024-12-09T09:32:28.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.964 [2024-12-09T09:32:28.405Z] =================================================================================================================== 00:21:55.964 [2024-12-09T09:32:28.405Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.964 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2568767 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2568469 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2568469 ']' 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2568469 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2568469 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2568469' 00:21:56.222 killing process with pid 2568469 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2568469 00:21:56.222 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2568469 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2569044 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2569044 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2569044 ']' 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.482 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.741 [2024-12-09 10:32:28.945597] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:21:56.741 [2024-12-09 10:32:28.945690] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.741 [2024-12-09 10:32:29.016982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.741 [2024-12-09 10:32:29.075802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.741 [2024-12-09 10:32:29.075858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.741 [2024-12-09 10:32:29.075887] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.741 [2024-12-09 10:32:29.075898] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.741 [2024-12-09 10:32:29.075908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:56.741 [2024-12-09 10:32:29.076499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.999 [2024-12-09 10:32:29.219794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.999 malloc0 00:21:56.999 [2024-12-09 10:32:29.250959] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.999 [2024-12-09 10:32:29.251293] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2569193 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2569193 /var/tmp/bdevperf.sock 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2569193 ']' 00:21:56.999 10:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.999 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.999 [2024-12-09 10:32:29.327767] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:21:56.999 [2024-12-09 10:32:29.327844] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569193 ] 00:21:56.999 [2024-12-09 10:32:29.393636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.258 [2024-12-09 10:32:29.452667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.258 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.258 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:57.258 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.n2jUW9F7x5 00:21:57.516 10:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:57.774 [2024-12-09 10:32:30.074852] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.774 nvme0n1 00:21:57.774 10:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.107 Running I/O for 1 seconds... 
00:21:59.127 3481.00 IOPS, 13.60 MiB/s 00:21:59.127 Latency(us) 00:21:59.127 [2024-12-09T09:32:31.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.127 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:59.127 Verification LBA range: start 0x0 length 0x2000 00:21:59.127 nvme0n1 : 1.02 3543.92 13.84 0.00 0.00 35801.54 6019.60 32234.00 00:21:59.127 [2024-12-09T09:32:31.568Z] =================================================================================================================== 00:21:59.127 [2024-12-09T09:32:31.568Z] Total : 3543.92 13.84 0.00 0.00 35801.54 6019.60 32234.00 00:21:59.127 { 00:21:59.127 "results": [ 00:21:59.127 { 00:21:59.127 "job": "nvme0n1", 00:21:59.127 "core_mask": "0x2", 00:21:59.127 "workload": "verify", 00:21:59.127 "status": "finished", 00:21:59.127 "verify_range": { 00:21:59.127 "start": 0, 00:21:59.127 "length": 8192 00:21:59.127 }, 00:21:59.127 "queue_depth": 128, 00:21:59.127 "io_size": 4096, 00:21:59.127 "runtime": 1.018363, 00:21:59.127 "iops": 3543.9229429977327, 00:21:59.127 "mibps": 13.843448996084893, 00:21:59.127 "io_failed": 0, 00:21:59.127 "io_timeout": 0, 00:21:59.127 "avg_latency_us": 35801.5393910286, 00:21:59.127 "min_latency_us": 6019.602962962963, 00:21:59.127 "max_latency_us": 32234.002962962964 00:21:59.127 } 00:21:59.127 ], 00:21:59.127 "core_count": 1 00:21:59.127 } 00:21:59.127 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:59.127 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.127 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.127 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.127 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:59.127 "subsystems": [ 00:21:59.127 { 00:21:59.128 "subsystem": 
"keyring", 00:21:59.128 "config": [ 00:21:59.128 { 00:21:59.128 "method": "keyring_file_add_key", 00:21:59.128 "params": { 00:21:59.128 "name": "key0", 00:21:59.128 "path": "/tmp/tmp.n2jUW9F7x5" 00:21:59.128 } 00:21:59.128 } 00:21:59.128 ] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "iobuf", 00:21:59.128 "config": [ 00:21:59.128 { 00:21:59.128 "method": "iobuf_set_options", 00:21:59.128 "params": { 00:21:59.128 "small_pool_count": 8192, 00:21:59.128 "large_pool_count": 1024, 00:21:59.128 "small_bufsize": 8192, 00:21:59.128 "large_bufsize": 135168, 00:21:59.128 "enable_numa": false 00:21:59.128 } 00:21:59.128 } 00:21:59.128 ] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "sock", 00:21:59.128 "config": [ 00:21:59.128 { 00:21:59.128 "method": "sock_set_default_impl", 00:21:59.128 "params": { 00:21:59.128 "impl_name": "posix" 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "sock_impl_set_options", 00:21:59.128 "params": { 00:21:59.128 "impl_name": "ssl", 00:21:59.128 "recv_buf_size": 4096, 00:21:59.128 "send_buf_size": 4096, 00:21:59.128 "enable_recv_pipe": true, 00:21:59.128 "enable_quickack": false, 00:21:59.128 "enable_placement_id": 0, 00:21:59.128 "enable_zerocopy_send_server": true, 00:21:59.128 "enable_zerocopy_send_client": false, 00:21:59.128 "zerocopy_threshold": 0, 00:21:59.128 "tls_version": 0, 00:21:59.128 "enable_ktls": false 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "sock_impl_set_options", 00:21:59.128 "params": { 00:21:59.128 "impl_name": "posix", 00:21:59.128 "recv_buf_size": 2097152, 00:21:59.128 "send_buf_size": 2097152, 00:21:59.128 "enable_recv_pipe": true, 00:21:59.128 "enable_quickack": false, 00:21:59.128 "enable_placement_id": 0, 00:21:59.128 "enable_zerocopy_send_server": true, 00:21:59.128 "enable_zerocopy_send_client": false, 00:21:59.128 "zerocopy_threshold": 0, 00:21:59.128 "tls_version": 0, 00:21:59.128 "enable_ktls": false 00:21:59.128 } 00:21:59.128 } 00:21:59.128 
] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "vmd", 00:21:59.128 "config": [] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "accel", 00:21:59.128 "config": [ 00:21:59.128 { 00:21:59.128 "method": "accel_set_options", 00:21:59.128 "params": { 00:21:59.128 "small_cache_size": 128, 00:21:59.128 "large_cache_size": 16, 00:21:59.128 "task_count": 2048, 00:21:59.128 "sequence_count": 2048, 00:21:59.128 "buf_count": 2048 00:21:59.128 } 00:21:59.128 } 00:21:59.128 ] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "bdev", 00:21:59.128 "config": [ 00:21:59.128 { 00:21:59.128 "method": "bdev_set_options", 00:21:59.128 "params": { 00:21:59.128 "bdev_io_pool_size": 65535, 00:21:59.128 "bdev_io_cache_size": 256, 00:21:59.128 "bdev_auto_examine": true, 00:21:59.128 "iobuf_small_cache_size": 128, 00:21:59.128 "iobuf_large_cache_size": 16 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "bdev_raid_set_options", 00:21:59.128 "params": { 00:21:59.128 "process_window_size_kb": 1024, 00:21:59.128 "process_max_bandwidth_mb_sec": 0 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "bdev_iscsi_set_options", 00:21:59.128 "params": { 00:21:59.128 "timeout_sec": 30 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "bdev_nvme_set_options", 00:21:59.128 "params": { 00:21:59.128 "action_on_timeout": "none", 00:21:59.128 "timeout_us": 0, 00:21:59.128 "timeout_admin_us": 0, 00:21:59.128 "keep_alive_timeout_ms": 10000, 00:21:59.128 "arbitration_burst": 0, 00:21:59.128 "low_priority_weight": 0, 00:21:59.128 "medium_priority_weight": 0, 00:21:59.128 "high_priority_weight": 0, 00:21:59.128 "nvme_adminq_poll_period_us": 10000, 00:21:59.128 "nvme_ioq_poll_period_us": 0, 00:21:59.128 "io_queue_requests": 0, 00:21:59.128 "delay_cmd_submit": true, 00:21:59.128 "transport_retry_count": 4, 00:21:59.128 "bdev_retry_count": 3, 00:21:59.128 "transport_ack_timeout": 0, 00:21:59.128 "ctrlr_loss_timeout_sec": 0, 
00:21:59.128 "reconnect_delay_sec": 0, 00:21:59.128 "fast_io_fail_timeout_sec": 0, 00:21:59.128 "disable_auto_failback": false, 00:21:59.128 "generate_uuids": false, 00:21:59.128 "transport_tos": 0, 00:21:59.128 "nvme_error_stat": false, 00:21:59.128 "rdma_srq_size": 0, 00:21:59.128 "io_path_stat": false, 00:21:59.128 "allow_accel_sequence": false, 00:21:59.128 "rdma_max_cq_size": 0, 00:21:59.128 "rdma_cm_event_timeout_ms": 0, 00:21:59.128 "dhchap_digests": [ 00:21:59.128 "sha256", 00:21:59.128 "sha384", 00:21:59.128 "sha512" 00:21:59.128 ], 00:21:59.128 "dhchap_dhgroups": [ 00:21:59.128 "null", 00:21:59.128 "ffdhe2048", 00:21:59.128 "ffdhe3072", 00:21:59.128 "ffdhe4096", 00:21:59.128 "ffdhe6144", 00:21:59.128 "ffdhe8192" 00:21:59.128 ] 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "bdev_nvme_set_hotplug", 00:21:59.128 "params": { 00:21:59.128 "period_us": 100000, 00:21:59.128 "enable": false 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "bdev_malloc_create", 00:21:59.128 "params": { 00:21:59.128 "name": "malloc0", 00:21:59.128 "num_blocks": 8192, 00:21:59.128 "block_size": 4096, 00:21:59.128 "physical_block_size": 4096, 00:21:59.128 "uuid": "05849fd9-d058-4415-b55a-2a0c0f1f2550", 00:21:59.128 "optimal_io_boundary": 0, 00:21:59.128 "md_size": 0, 00:21:59.128 "dif_type": 0, 00:21:59.128 "dif_is_head_of_md": false, 00:21:59.128 "dif_pi_format": 0 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "bdev_wait_for_examine" 00:21:59.128 } 00:21:59.128 ] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "nbd", 00:21:59.128 "config": [] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "scheduler", 00:21:59.128 "config": [ 00:21:59.128 { 00:21:59.128 "method": "framework_set_scheduler", 00:21:59.128 "params": { 00:21:59.128 "name": "static" 00:21:59.128 } 00:21:59.128 } 00:21:59.128 ] 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "subsystem": "nvmf", 00:21:59.128 "config": [ 00:21:59.128 { 
00:21:59.128 "method": "nvmf_set_config", 00:21:59.128 "params": { 00:21:59.128 "discovery_filter": "match_any", 00:21:59.128 "admin_cmd_passthru": { 00:21:59.128 "identify_ctrlr": false 00:21:59.128 }, 00:21:59.128 "dhchap_digests": [ 00:21:59.128 "sha256", 00:21:59.128 "sha384", 00:21:59.128 "sha512" 00:21:59.128 ], 00:21:59.128 "dhchap_dhgroups": [ 00:21:59.128 "null", 00:21:59.128 "ffdhe2048", 00:21:59.128 "ffdhe3072", 00:21:59.128 "ffdhe4096", 00:21:59.128 "ffdhe6144", 00:21:59.128 "ffdhe8192" 00:21:59.128 ] 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "nvmf_set_max_subsystems", 00:21:59.128 "params": { 00:21:59.128 "max_subsystems": 1024 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "nvmf_set_crdt", 00:21:59.128 "params": { 00:21:59.128 "crdt1": 0, 00:21:59.128 "crdt2": 0, 00:21:59.128 "crdt3": 0 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "nvmf_create_transport", 00:21:59.128 "params": { 00:21:59.128 "trtype": "TCP", 00:21:59.128 "max_queue_depth": 128, 00:21:59.128 "max_io_qpairs_per_ctrlr": 127, 00:21:59.128 "in_capsule_data_size": 4096, 00:21:59.128 "max_io_size": 131072, 00:21:59.128 "io_unit_size": 131072, 00:21:59.128 "max_aq_depth": 128, 00:21:59.128 "num_shared_buffers": 511, 00:21:59.128 "buf_cache_size": 4294967295, 00:21:59.128 "dif_insert_or_strip": false, 00:21:59.128 "zcopy": false, 00:21:59.128 "c2h_success": false, 00:21:59.128 "sock_priority": 0, 00:21:59.128 "abort_timeout_sec": 1, 00:21:59.128 "ack_timeout": 0, 00:21:59.128 "data_wr_pool_size": 0 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "nvmf_create_subsystem", 00:21:59.128 "params": { 00:21:59.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.128 "allow_any_host": false, 00:21:59.128 "serial_number": "00000000000000000000", 00:21:59.128 "model_number": "SPDK bdev Controller", 00:21:59.128 "max_namespaces": 32, 00:21:59.128 "min_cntlid": 1, 00:21:59.128 "max_cntlid": 65519, 00:21:59.128 
"ana_reporting": false 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "nvmf_subsystem_add_host", 00:21:59.128 "params": { 00:21:59.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.128 "host": "nqn.2016-06.io.spdk:host1", 00:21:59.128 "psk": "key0" 00:21:59.128 } 00:21:59.128 }, 00:21:59.128 { 00:21:59.128 "method": "nvmf_subsystem_add_ns", 00:21:59.128 "params": { 00:21:59.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.128 "namespace": { 00:21:59.128 "nsid": 1, 00:21:59.128 "bdev_name": "malloc0", 00:21:59.128 "nguid": "05849FD9D0584415B55A2A0C0F1F2550", 00:21:59.128 "uuid": "05849fd9-d058-4415-b55a-2a0c0f1f2550", 00:21:59.129 "no_auto_visible": false 00:21:59.129 } 00:21:59.129 } 00:21:59.129 }, 00:21:59.129 { 00:21:59.129 "method": "nvmf_subsystem_add_listener", 00:21:59.129 "params": { 00:21:59.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.129 "listen_address": { 00:21:59.129 "trtype": "TCP", 00:21:59.129 "adrfam": "IPv4", 00:21:59.129 "traddr": "10.0.0.2", 00:21:59.129 "trsvcid": "4420" 00:21:59.129 }, 00:21:59.129 "secure_channel": false, 00:21:59.129 "sock_impl": "ssl" 00:21:59.129 } 00:21:59.129 } 00:21:59.129 ] 00:21:59.129 } 00:21:59.129 ] 00:21:59.129 }' 00:21:59.129 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:59.387 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:59.387 "subsystems": [ 00:21:59.387 { 00:21:59.387 "subsystem": "keyring", 00:21:59.387 "config": [ 00:21:59.387 { 00:21:59.387 "method": "keyring_file_add_key", 00:21:59.387 "params": { 00:21:59.387 "name": "key0", 00:21:59.387 "path": "/tmp/tmp.n2jUW9F7x5" 00:21:59.387 } 00:21:59.387 } 00:21:59.387 ] 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "subsystem": "iobuf", 00:21:59.387 "config": [ 00:21:59.387 { 00:21:59.387 "method": "iobuf_set_options", 00:21:59.387 "params": { 00:21:59.387 
"small_pool_count": 8192, 00:21:59.387 "large_pool_count": 1024, 00:21:59.387 "small_bufsize": 8192, 00:21:59.387 "large_bufsize": 135168, 00:21:59.387 "enable_numa": false 00:21:59.387 } 00:21:59.387 } 00:21:59.387 ] 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "subsystem": "sock", 00:21:59.387 "config": [ 00:21:59.387 { 00:21:59.387 "method": "sock_set_default_impl", 00:21:59.387 "params": { 00:21:59.387 "impl_name": "posix" 00:21:59.387 } 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "method": "sock_impl_set_options", 00:21:59.387 "params": { 00:21:59.387 "impl_name": "ssl", 00:21:59.387 "recv_buf_size": 4096, 00:21:59.387 "send_buf_size": 4096, 00:21:59.387 "enable_recv_pipe": true, 00:21:59.387 "enable_quickack": false, 00:21:59.387 "enable_placement_id": 0, 00:21:59.387 "enable_zerocopy_send_server": true, 00:21:59.387 "enable_zerocopy_send_client": false, 00:21:59.387 "zerocopy_threshold": 0, 00:21:59.387 "tls_version": 0, 00:21:59.387 "enable_ktls": false 00:21:59.387 } 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "method": "sock_impl_set_options", 00:21:59.387 "params": { 00:21:59.387 "impl_name": "posix", 00:21:59.387 "recv_buf_size": 2097152, 00:21:59.387 "send_buf_size": 2097152, 00:21:59.387 "enable_recv_pipe": true, 00:21:59.387 "enable_quickack": false, 00:21:59.387 "enable_placement_id": 0, 00:21:59.387 "enable_zerocopy_send_server": true, 00:21:59.387 "enable_zerocopy_send_client": false, 00:21:59.387 "zerocopy_threshold": 0, 00:21:59.387 "tls_version": 0, 00:21:59.387 "enable_ktls": false 00:21:59.387 } 00:21:59.387 } 00:21:59.387 ] 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "subsystem": "vmd", 00:21:59.387 "config": [] 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "subsystem": "accel", 00:21:59.387 "config": [ 00:21:59.387 { 00:21:59.387 "method": "accel_set_options", 00:21:59.387 "params": { 00:21:59.387 "small_cache_size": 128, 00:21:59.387 "large_cache_size": 16, 00:21:59.387 "task_count": 2048, 00:21:59.387 "sequence_count": 2048, 00:21:59.387 
"buf_count": 2048 00:21:59.387 } 00:21:59.387 } 00:21:59.387 ] 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "subsystem": "bdev", 00:21:59.387 "config": [ 00:21:59.387 { 00:21:59.387 "method": "bdev_set_options", 00:21:59.387 "params": { 00:21:59.387 "bdev_io_pool_size": 65535, 00:21:59.387 "bdev_io_cache_size": 256, 00:21:59.387 "bdev_auto_examine": true, 00:21:59.387 "iobuf_small_cache_size": 128, 00:21:59.387 "iobuf_large_cache_size": 16 00:21:59.387 } 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "method": "bdev_raid_set_options", 00:21:59.387 "params": { 00:21:59.387 "process_window_size_kb": 1024, 00:21:59.387 "process_max_bandwidth_mb_sec": 0 00:21:59.387 } 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "method": "bdev_iscsi_set_options", 00:21:59.387 "params": { 00:21:59.387 "timeout_sec": 30 00:21:59.387 } 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "method": "bdev_nvme_set_options", 00:21:59.387 "params": { 00:21:59.387 "action_on_timeout": "none", 00:21:59.387 "timeout_us": 0, 00:21:59.387 "timeout_admin_us": 0, 00:21:59.387 "keep_alive_timeout_ms": 10000, 00:21:59.387 "arbitration_burst": 0, 00:21:59.387 "low_priority_weight": 0, 00:21:59.387 "medium_priority_weight": 0, 00:21:59.387 "high_priority_weight": 0, 00:21:59.387 "nvme_adminq_poll_period_us": 10000, 00:21:59.387 "nvme_ioq_poll_period_us": 0, 00:21:59.387 "io_queue_requests": 512, 00:21:59.387 "delay_cmd_submit": true, 00:21:59.387 "transport_retry_count": 4, 00:21:59.387 "bdev_retry_count": 3, 00:21:59.387 "transport_ack_timeout": 0, 00:21:59.387 "ctrlr_loss_timeout_sec": 0, 00:21:59.387 "reconnect_delay_sec": 0, 00:21:59.387 "fast_io_fail_timeout_sec": 0, 00:21:59.387 "disable_auto_failback": false, 00:21:59.387 "generate_uuids": false, 00:21:59.387 "transport_tos": 0, 00:21:59.387 "nvme_error_stat": false, 00:21:59.387 "rdma_srq_size": 0, 00:21:59.387 "io_path_stat": false, 00:21:59.387 "allow_accel_sequence": false, 00:21:59.387 "rdma_max_cq_size": 0, 00:21:59.387 "rdma_cm_event_timeout_ms": 0, 
00:21:59.387 "dhchap_digests": [ 00:21:59.387 "sha256", 00:21:59.387 "sha384", 00:21:59.387 "sha512" 00:21:59.387 ], 00:21:59.387 "dhchap_dhgroups": [ 00:21:59.387 "null", 00:21:59.387 "ffdhe2048", 00:21:59.387 "ffdhe3072", 00:21:59.387 "ffdhe4096", 00:21:59.387 "ffdhe6144", 00:21:59.387 "ffdhe8192" 00:21:59.387 ] 00:21:59.387 } 00:21:59.387 }, 00:21:59.387 { 00:21:59.387 "method": "bdev_nvme_attach_controller", 00:21:59.387 "params": { 00:21:59.387 "name": "nvme0", 00:21:59.387 "trtype": "TCP", 00:21:59.387 "adrfam": "IPv4", 00:21:59.387 "traddr": "10.0.0.2", 00:21:59.387 "trsvcid": "4420", 00:21:59.387 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.387 "prchk_reftag": false, 00:21:59.388 "prchk_guard": false, 00:21:59.388 "ctrlr_loss_timeout_sec": 0, 00:21:59.388 "reconnect_delay_sec": 0, 00:21:59.388 "fast_io_fail_timeout_sec": 0, 00:21:59.388 "psk": "key0", 00:21:59.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.388 "hdgst": false, 00:21:59.388 "ddgst": false, 00:21:59.388 "multipath": "multipath" 00:21:59.388 } 00:21:59.388 }, 00:21:59.388 { 00:21:59.388 "method": "bdev_nvme_set_hotplug", 00:21:59.388 "params": { 00:21:59.388 "period_us": 100000, 00:21:59.388 "enable": false 00:21:59.388 } 00:21:59.388 }, 00:21:59.388 { 00:21:59.388 "method": "bdev_enable_histogram", 00:21:59.388 "params": { 00:21:59.388 "name": "nvme0n1", 00:21:59.388 "enable": true 00:21:59.388 } 00:21:59.388 }, 00:21:59.388 { 00:21:59.388 "method": "bdev_wait_for_examine" 00:21:59.388 } 00:21:59.388 ] 00:21:59.388 }, 00:21:59.388 { 00:21:59.388 "subsystem": "nbd", 00:21:59.388 "config": [] 00:21:59.388 } 00:21:59.388 ] 00:21:59.388 }' 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2569193 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2569193 ']' 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2569193 00:21:59.388 10:32:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2569193 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569193' 00:21:59.388 killing process with pid 2569193 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2569193 00:21:59.388 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.388 00:21:59.388 Latency(us) 00:21:59.388 [2024-12-09T09:32:31.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.388 [2024-12-09T09:32:31.829Z] =================================================================================================================== 00:21:59.388 [2024-12-09T09:32:31.829Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.388 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2569193 00:21:59.646 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2569044 00:21:59.646 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2569044 ']' 00:21:59.646 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2569044 00:21:59.646 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.646 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.646 
10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2569044 00:21:59.904 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.904 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.904 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569044' 00:21:59.904 killing process with pid 2569044 00:21:59.904 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2569044 00:21:59.904 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2569044 00:22:00.162 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:00.162 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.162 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:00.162 "subsystems": [ 00:22:00.163 { 00:22:00.163 "subsystem": "keyring", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "keyring_file_add_key", 00:22:00.163 "params": { 00:22:00.163 "name": "key0", 00:22:00.163 "path": "/tmp/tmp.n2jUW9F7x5" 00:22:00.163 } 00:22:00.163 } 00:22:00.163 ] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "iobuf", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "iobuf_set_options", 00:22:00.163 "params": { 00:22:00.163 "small_pool_count": 8192, 00:22:00.163 "large_pool_count": 1024, 00:22:00.163 "small_bufsize": 8192, 00:22:00.163 "large_bufsize": 135168, 00:22:00.163 "enable_numa": false 00:22:00.163 } 00:22:00.163 } 00:22:00.163 ] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "sock", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "sock_set_default_impl", 00:22:00.163 "params": { 00:22:00.163 "impl_name": "posix" 
00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "sock_impl_set_options", 00:22:00.163 "params": { 00:22:00.163 "impl_name": "ssl", 00:22:00.163 "recv_buf_size": 4096, 00:22:00.163 "send_buf_size": 4096, 00:22:00.163 "enable_recv_pipe": true, 00:22:00.163 "enable_quickack": false, 00:22:00.163 "enable_placement_id": 0, 00:22:00.163 "enable_zerocopy_send_server": true, 00:22:00.163 "enable_zerocopy_send_client": false, 00:22:00.163 "zerocopy_threshold": 0, 00:22:00.163 "tls_version": 0, 00:22:00.163 "enable_ktls": false 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "sock_impl_set_options", 00:22:00.163 "params": { 00:22:00.163 "impl_name": "posix", 00:22:00.163 "recv_buf_size": 2097152, 00:22:00.163 "send_buf_size": 2097152, 00:22:00.163 "enable_recv_pipe": true, 00:22:00.163 "enable_quickack": false, 00:22:00.163 "enable_placement_id": 0, 00:22:00.163 "enable_zerocopy_send_server": true, 00:22:00.163 "enable_zerocopy_send_client": false, 00:22:00.163 "zerocopy_threshold": 0, 00:22:00.163 "tls_version": 0, 00:22:00.163 "enable_ktls": false 00:22:00.163 } 00:22:00.163 } 00:22:00.163 ] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "vmd", 00:22:00.163 "config": [] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "accel", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "accel_set_options", 00:22:00.163 "params": { 00:22:00.163 "small_cache_size": 128, 00:22:00.163 "large_cache_size": 16, 00:22:00.163 "task_count": 2048, 00:22:00.163 "sequence_count": 2048, 00:22:00.163 "buf_count": 2048 00:22:00.163 } 00:22:00.163 } 00:22:00.163 ] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "bdev", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "bdev_set_options", 00:22:00.163 "params": { 00:22:00.163 "bdev_io_pool_size": 65535, 00:22:00.163 "bdev_io_cache_size": 256, 00:22:00.163 "bdev_auto_examine": true, 00:22:00.163 "iobuf_small_cache_size": 128, 00:22:00.163 
"iobuf_large_cache_size": 16 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "bdev_raid_set_options", 00:22:00.163 "params": { 00:22:00.163 "process_window_size_kb": 1024, 00:22:00.163 "process_max_bandwidth_mb_sec": 0 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "bdev_iscsi_set_options", 00:22:00.163 "params": { 00:22:00.163 "timeout_sec": 30 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "bdev_nvme_set_options", 00:22:00.163 "params": { 00:22:00.163 "action_on_timeout": "none", 00:22:00.163 "timeout_us": 0, 00:22:00.163 "timeout_admin_us": 0, 00:22:00.163 "keep_alive_timeout_ms": 10000, 00:22:00.163 "arbitration_burst": 0, 00:22:00.163 "low_priority_weight": 0, 00:22:00.163 "medium_priority_weight": 0, 00:22:00.163 "high_priority_weight": 0, 00:22:00.163 "nvme_adminq_poll_period_us": 10000, 00:22:00.163 "nvme_ioq_poll_period_us": 0, 00:22:00.163 "io_queue_requests": 0, 00:22:00.163 "delay_cmd_submit": true, 00:22:00.163 "transport_retry_count": 4, 00:22:00.163 "bdev_retry_count": 3, 00:22:00.163 "transport_ack_timeout": 0, 00:22:00.163 "ctrlr_loss_timeout_sec": 0, 00:22:00.163 "reconnect_delay_sec": 0, 00:22:00.163 "fast_io_fail_timeout_sec": 0, 00:22:00.163 "disable_auto_failback": false, 00:22:00.163 "generate_uuids": false, 00:22:00.163 "transport_tos": 0, 00:22:00.163 "nvme_error_stat": false, 00:22:00.163 "rdma_srq_size": 0, 00:22:00.163 "io_path_stat": false, 00:22:00.163 "allow_accel_sequence": false, 00:22:00.163 "rdma_max_cq_size": 0, 00:22:00.163 "rdma_cm_event_timeout_ms": 0, 00:22:00.163 "dhchap_digests": [ 00:22:00.163 "sha256", 00:22:00.163 "sha384", 00:22:00.163 "sha512" 00:22:00.163 ], 00:22:00.163 "dhchap_dhgroups": [ 00:22:00.163 "null", 00:22:00.163 "ffdhe2048", 00:22:00.163 "ffdhe3072", 00:22:00.163 "ffdhe4096", 00:22:00.163 "ffdhe6144", 00:22:00.163 "ffdhe8192" 00:22:00.163 ] 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "bdev_nvme_set_hotplug", 
00:22:00.163 "params": { 00:22:00.163 "period_us": 100000, 00:22:00.163 "enable": false 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "bdev_malloc_create", 00:22:00.163 "params": { 00:22:00.163 "name": "malloc0", 00:22:00.163 "num_blocks": 8192, 00:22:00.163 "block_size": 4096, 00:22:00.163 "physical_block_size": 4096, 00:22:00.163 "uuid": "05849fd9-d058-4415-b55a-2a0c0f1f2550", 00:22:00.163 "optimal_io_boundary": 0, 00:22:00.163 "md_size": 0, 00:22:00.163 "dif_type": 0, 00:22:00.163 "dif_is_head_of_md": false, 00:22:00.163 "dif_pi_format": 0 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "bdev_wait_for_examine" 00:22:00.163 } 00:22:00.163 ] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "nbd", 00:22:00.163 "config": [] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "scheduler", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "framework_set_scheduler", 00:22:00.163 "params": { 00:22:00.163 "name": "static" 00:22:00.163 } 00:22:00.163 } 00:22:00.163 ] 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "subsystem": "nvmf", 00:22:00.163 "config": [ 00:22:00.163 { 00:22:00.163 "method": "nvmf_set_config", 00:22:00.163 "params": { 00:22:00.163 "discovery_filter": "match_any", 00:22:00.163 "admin_cmd_passthru": { 00:22:00.163 "identify_ctrlr": false 00:22:00.163 }, 00:22:00.163 "dhchap_digests": [ 00:22:00.163 "sha256", 00:22:00.163 "sha384", 00:22:00.163 "sha512" 00:22:00.163 ], 00:22:00.163 "dhchap_dhgroups": [ 00:22:00.163 "null", 00:22:00.163 "ffdhe2048", 00:22:00.163 "ffdhe3072", 00:22:00.163 "ffdhe4096", 00:22:00.163 "ffdhe6144", 00:22:00.163 "ffdhe8192" 00:22:00.163 ] 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "nvmf_set_max_subsystems", 00:22:00.163 "params": { 00:22:00.163 "max_subsystems": 1024 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "nvmf_set_crdt", 00:22:00.163 "params": { 00:22:00.163 "crdt1": 0, 00:22:00.163 "crdt2": 0, 00:22:00.163 
"crdt3": 0 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "nvmf_create_transport", 00:22:00.163 "params": { 00:22:00.163 "trtype": "TCP", 00:22:00.163 "max_queue_depth": 128, 00:22:00.163 "max_io_qpairs_per_ctrlr": 127, 00:22:00.163 "in_capsule_data_size": 4096, 00:22:00.163 "max_io_size": 131072, 00:22:00.163 "io_unit_size": 131072, 00:22:00.163 "max_aq_depth": 128, 00:22:00.163 "num_shared_buffers": 511, 00:22:00.163 "buf_cache_size": 4294967295, 00:22:00.163 "dif_insert_or_strip": false, 00:22:00.163 "zcopy": false, 00:22:00.163 "c2h_success": false, 00:22:00.163 "sock_priority": 0, 00:22:00.163 "abort_timeout_sec": 1, 00:22:00.163 "ack_timeout": 0, 00:22:00.163 "data_wr_pool_size": 0 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "nvmf_create_subsystem", 00:22:00.163 "params": { 00:22:00.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.163 "allow_any_host": false, 00:22:00.163 "serial_number": "00000000000000000000", 00:22:00.163 "model_number": "SPDK bdev Controller", 00:22:00.163 "max_namespaces": 32, 00:22:00.163 "min_cntlid": 1, 00:22:00.163 "max_cntlid": 65519, 00:22:00.163 "ana_reporting": false 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "nvmf_subsystem_add_host", 00:22:00.163 "params": { 00:22:00.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.163 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.163 "psk": "key0" 00:22:00.163 } 00:22:00.163 }, 00:22:00.163 { 00:22:00.163 "method": "nvmf_subsystem_add_ns", 00:22:00.163 "params": { 00:22:00.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.164 "namespace": { 00:22:00.164 "nsid": 1, 00:22:00.164 "bdev_name": "malloc0", 00:22:00.164 "nguid": "05849FD9D0584415B55A2A0C0F1F2550", 00:22:00.164 "uuid": "05849fd9-d058-4415-b55a-2a0c0f1f2550", 00:22:00.164 "no_auto_visible": false 00:22:00.164 } 00:22:00.164 } 00:22:00.164 }, 00:22:00.164 { 00:22:00.164 "method": "nvmf_subsystem_add_listener", 00:22:00.164 "params": { 00:22:00.164 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:00.164 "listen_address": { 00:22:00.164 "trtype": "TCP", 00:22:00.164 "adrfam": "IPv4", 00:22:00.164 "traddr": "10.0.0.2", 00:22:00.164 "trsvcid": "4420" 00:22:00.164 }, 00:22:00.164 "secure_channel": false, 00:22:00.164 "sock_impl": "ssl" 00:22:00.164 } 00:22:00.164 } 00:22:00.164 ] 00:22:00.164 } 00:22:00.164 ] 00:22:00.164 }' 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2569486 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2569486 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2569486 ']' 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:00.164 10:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.164 [2024-12-09 10:32:32.430839] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:22:00.164 [2024-12-09 10:32:32.430937] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.164 [2024-12-09 10:32:32.504037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.164 [2024-12-09 10:32:32.562719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.164 [2024-12-09 10:32:32.562773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.164 [2024-12-09 10:32:32.562802] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.164 [2024-12-09 10:32:32.562813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.164 [2024-12-09 10:32:32.562823] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:00.164 [2024-12-09 10:32:32.563547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.421 [2024-12-09 10:32:32.813530] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.421 [2024-12-09 10:32:32.845559] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:00.421 [2024-12-09 10:32:32.845835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2569638 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2569638 /var/tmp/bdevperf.sock 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2569638 ']' 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:01.352 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.353 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.353 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:01.353 "subsystems": [ 00:22:01.353 { 00:22:01.353 "subsystem": "keyring", 00:22:01.353 "config": [ 00:22:01.353 { 00:22:01.353 "method": "keyring_file_add_key", 00:22:01.353 "params": { 00:22:01.353 "name": "key0", 00:22:01.353 "path": "/tmp/tmp.n2jUW9F7x5" 00:22:01.353 } 00:22:01.353 } 00:22:01.353 ] 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "subsystem": "iobuf", 00:22:01.353 "config": [ 00:22:01.353 { 00:22:01.353 "method": "iobuf_set_options", 00:22:01.353 "params": { 00:22:01.353 "small_pool_count": 8192, 00:22:01.353 "large_pool_count": 1024, 00:22:01.353 "small_bufsize": 8192, 00:22:01.353 "large_bufsize": 135168, 00:22:01.353 "enable_numa": false 00:22:01.353 } 00:22:01.353 } 00:22:01.353 ] 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "subsystem": "sock", 00:22:01.353 "config": [ 00:22:01.353 { 00:22:01.353 "method": "sock_set_default_impl", 00:22:01.353 "params": { 00:22:01.353 "impl_name": "posix" 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "sock_impl_set_options", 00:22:01.353 "params": { 00:22:01.353 "impl_name": "ssl", 00:22:01.353 "recv_buf_size": 4096, 00:22:01.353 "send_buf_size": 4096, 00:22:01.353 "enable_recv_pipe": true, 00:22:01.353 "enable_quickack": false, 00:22:01.353 "enable_placement_id": 0, 00:22:01.353 "enable_zerocopy_send_server": true, 00:22:01.353 "enable_zerocopy_send_client": false, 00:22:01.353 
"zerocopy_threshold": 0, 00:22:01.353 "tls_version": 0, 00:22:01.353 "enable_ktls": false 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "sock_impl_set_options", 00:22:01.353 "params": { 00:22:01.353 "impl_name": "posix", 00:22:01.353 "recv_buf_size": 2097152, 00:22:01.353 "send_buf_size": 2097152, 00:22:01.353 "enable_recv_pipe": true, 00:22:01.353 "enable_quickack": false, 00:22:01.353 "enable_placement_id": 0, 00:22:01.353 "enable_zerocopy_send_server": true, 00:22:01.353 "enable_zerocopy_send_client": false, 00:22:01.353 "zerocopy_threshold": 0, 00:22:01.353 "tls_version": 0, 00:22:01.353 "enable_ktls": false 00:22:01.353 } 00:22:01.353 } 00:22:01.353 ] 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "subsystem": "vmd", 00:22:01.353 "config": [] 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "subsystem": "accel", 00:22:01.353 "config": [ 00:22:01.353 { 00:22:01.353 "method": "accel_set_options", 00:22:01.353 "params": { 00:22:01.353 "small_cache_size": 128, 00:22:01.353 "large_cache_size": 16, 00:22:01.353 "task_count": 2048, 00:22:01.353 "sequence_count": 2048, 00:22:01.353 "buf_count": 2048 00:22:01.353 } 00:22:01.353 } 00:22:01.353 ] 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "subsystem": "bdev", 00:22:01.353 "config": [ 00:22:01.353 { 00:22:01.353 "method": "bdev_set_options", 00:22:01.353 "params": { 00:22:01.353 "bdev_io_pool_size": 65535, 00:22:01.353 "bdev_io_cache_size": 256, 00:22:01.353 "bdev_auto_examine": true, 00:22:01.353 "iobuf_small_cache_size": 128, 00:22:01.353 "iobuf_large_cache_size": 16 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "bdev_raid_set_options", 00:22:01.353 "params": { 00:22:01.353 "process_window_size_kb": 1024, 00:22:01.353 "process_max_bandwidth_mb_sec": 0 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "bdev_iscsi_set_options", 00:22:01.353 "params": { 00:22:01.353 "timeout_sec": 30 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": 
"bdev_nvme_set_options", 00:22:01.353 "params": { 00:22:01.353 "action_on_timeout": "none", 00:22:01.353 "timeout_us": 0, 00:22:01.353 "timeout_admin_us": 0, 00:22:01.353 "keep_alive_timeout_ms": 10000, 00:22:01.353 "arbitration_burst": 0, 00:22:01.353 "low_priority_weight": 0, 00:22:01.353 "medium_priority_weight": 0, 00:22:01.353 "high_priority_weight": 0, 00:22:01.353 "nvme_adminq_poll_period_us": 10000, 00:22:01.353 "nvme_ioq_poll_period_us": 0, 00:22:01.353 "io_queue_requests": 512, 00:22:01.353 "delay_cmd_submit": true, 00:22:01.353 "transport_retry_count": 4, 00:22:01.353 "bdev_retry_count": 3, 00:22:01.353 "transport_ack_timeout": 0, 00:22:01.353 "ctrlr_loss_timeout_sec": 0, 00:22:01.353 "reconnect_delay_sec": 0, 00:22:01.353 "fast_io_fail_timeout_sec": 0, 00:22:01.353 "disable_auto_failback": false, 00:22:01.353 "generate_uuids": false, 00:22:01.353 "transport_tos": 0, 00:22:01.353 "nvme_error_stat": false, 00:22:01.353 "rdma_srq_size": 0, 00:22:01.353 "io_path_stat": false, 00:22:01.353 "allow_accel_sequence": false, 00:22:01.353 "rdma_max_cq_size": 0, 00:22:01.353 "rdma_cm_event_timeout_ms": 0, 00:22:01.353 "dhchap_digests": [ 00:22:01.353 "sha256", 00:22:01.353 "sha384", 00:22:01.353 "sha512" 00:22:01.353 ], 00:22:01.353 "dhchap_dhgroups": [ 00:22:01.353 "null", 00:22:01.353 "ffdhe2048", 00:22:01.353 "ffdhe3072", 00:22:01.353 "ffdhe4096", 00:22:01.353 "ffdhe6144", 00:22:01.353 "ffdhe8192" 00:22:01.353 ] 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "bdev_nvme_attach_controller", 00:22:01.353 "params": { 00:22:01.353 "name": "nvme0", 00:22:01.353 "trtype": "TCP", 00:22:01.353 "adrfam": "IPv4", 00:22:01.353 "traddr": "10.0.0.2", 00:22:01.353 "trsvcid": "4420", 00:22:01.353 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.353 "prchk_reftag": false, 00:22:01.353 "prchk_guard": false, 00:22:01.353 "ctrlr_loss_timeout_sec": 0, 00:22:01.353 "reconnect_delay_sec": 0, 00:22:01.353 "fast_io_fail_timeout_sec": 0, 00:22:01.353 "psk": "key0", 
00:22:01.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.353 "hdgst": false, 00:22:01.353 "ddgst": false, 00:22:01.353 "multipath": "multipath" 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "bdev_nvme_set_hotplug", 00:22:01.353 "params": { 00:22:01.353 "period_us": 100000, 00:22:01.353 "enable": false 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "bdev_enable_histogram", 00:22:01.353 "params": { 00:22:01.353 "name": "nvme0n1", 00:22:01.353 "enable": true 00:22:01.353 } 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "method": "bdev_wait_for_examine" 00:22:01.353 } 00:22:01.353 ] 00:22:01.353 }, 00:22:01.353 { 00:22:01.353 "subsystem": "nbd", 00:22:01.353 "config": [] 00:22:01.353 } 00:22:01.353 ] 00:22:01.353 }' 00:22:01.353 [2024-12-09 10:32:33.558429] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:22:01.353 [2024-12-09 10:32:33.558531] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569638 ] 00:22:01.353 [2024-12-09 10:32:33.623359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.353 [2024-12-09 10:32:33.682053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.611 [2024-12-09 10:32:33.862937] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:01.611 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:01.611 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:01.611 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:01.611 10:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:01.868 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.868 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:02.126 Running I/O for 1 seconds... 00:22:03.058 3312.00 IOPS, 12.94 MiB/s 00:22:03.058 Latency(us) 00:22:03.058 [2024-12-09T09:32:35.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.058 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:03.058 Verification LBA range: start 0x0 length 0x2000 00:22:03.058 nvme0n1 : 1.02 3385.07 13.22 0.00 0.00 37546.74 6019.60 35535.08 00:22:03.058 [2024-12-09T09:32:35.499Z] =================================================================================================================== 00:22:03.058 [2024-12-09T09:32:35.499Z] Total : 3385.07 13.22 0.00 0.00 37546.74 6019.60 35535.08 00:22:03.058 { 00:22:03.058 "results": [ 00:22:03.058 { 00:22:03.058 "job": "nvme0n1", 00:22:03.058 "core_mask": "0x2", 00:22:03.058 "workload": "verify", 00:22:03.058 "status": "finished", 00:22:03.058 "verify_range": { 00:22:03.058 "start": 0, 00:22:03.058 "length": 8192 00:22:03.058 }, 00:22:03.058 "queue_depth": 128, 00:22:03.058 "io_size": 4096, 00:22:03.058 "runtime": 1.016228, 00:22:03.058 "iops": 3385.067130604549, 00:22:03.058 "mibps": 13.22291847892402, 00:22:03.058 "io_failed": 0, 00:22:03.058 "io_timeout": 0, 00:22:03.058 "avg_latency_us": 37546.74428251507, 00:22:03.058 "min_latency_us": 6019.602962962963, 00:22:03.058 "max_latency_us": 35535.07555555556 00:22:03.058 } 00:22:03.058 ], 00:22:03.058 "core_count": 1 00:22:03.058 } 00:22:03.058 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:03.058 10:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:03.058 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:03.058 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:03.059 nvmf_trace.0 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2569638 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2569638 ']' 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2569638 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.059 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2569638 00:22:03.316 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:03.316 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:03.316 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569638' 00:22:03.316 killing process with pid 2569638 00:22:03.316 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2569638 00:22:03.316 Received shutdown signal, test time was about 1.000000 seconds 00:22:03.316 00:22:03.316 Latency(us) 00:22:03.316 [2024-12-09T09:32:35.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.316 [2024-12-09T09:32:35.757Z] =================================================================================================================== 00:22:03.316 [2024-12-09T09:32:35.757Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.316 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2569638 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:03.572 rmmod nvme_tcp 00:22:03.572 rmmod nvme_fabrics 00:22:03.572 rmmod nvme_keyring 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2569486 ']' 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2569486 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2569486 ']' 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2569486 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2569486 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569486' 00:22:03.572 killing process with pid 2569486 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2569486 00:22:03.572 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2569486 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.829 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.jgljIHTMNK /tmp/tmp.Z2o7F0zOK7 /tmp/tmp.n2jUW9F7x5 00:22:06.366 00:22:06.366 real 1m24.148s 00:22:06.366 user 2m17.646s 00:22:06.366 sys 0m26.258s 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.366 ************************************ 00:22:06.366 END TEST nvmf_tls 00:22:06.366 ************************************ 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.366 ************************************ 00:22:06.366 START TEST nvmf_fips 00:22:06.366 ************************************ 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:06.366 * Looking for test storage... 00:22:06.366 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:06.366 
10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:06.366 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:06.367 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.367 --rc genhtml_branch_coverage=1 00:22:06.367 --rc genhtml_function_coverage=1 00:22:06.367 --rc genhtml_legend=1 00:22:06.367 --rc geninfo_all_blocks=1 00:22:06.367 --rc geninfo_unexecuted_blocks=1 00:22:06.367 00:22:06.367 ' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.367 --rc genhtml_branch_coverage=1 00:22:06.367 --rc genhtml_function_coverage=1 00:22:06.367 --rc genhtml_legend=1 00:22:06.367 --rc geninfo_all_blocks=1 00:22:06.367 --rc geninfo_unexecuted_blocks=1 00:22:06.367 00:22:06.367 ' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.367 --rc genhtml_branch_coverage=1 00:22:06.367 --rc genhtml_function_coverage=1 00:22:06.367 --rc genhtml_legend=1 00:22:06.367 --rc geninfo_all_blocks=1 00:22:06.367 --rc geninfo_unexecuted_blocks=1 00:22:06.367 00:22:06.367 ' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:06.367 --rc genhtml_branch_coverage=1 00:22:06.367 --rc genhtml_function_coverage=1 00:22:06.367 --rc genhtml_legend=1 00:22:06.367 --rc geninfo_all_blocks=1 00:22:06.367 --rc geninfo_unexecuted_blocks=1 00:22:06.367 00:22:06.367 ' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.367 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.367 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:06.367 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.367 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:06.368 Error setting digest 00:22:06.368 4062D496AC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:06.368 4062D496AC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:06.368 10:32:38 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:06.368 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:08.906 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:08.907 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:08.907 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:08.907 Found net devices under 0000:09:00.0: cvl_0_0 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:08.907 Found net devices under 0000:09:00.1: cvl_0_1 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:08.907 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:08.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:08.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:22:08.907 00:22:08.907 --- 10.0.0.2 ping statistics --- 00:22:08.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.907 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:08.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:08.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:08.907 00:22:08.907 --- 10.0.0.1 ping statistics --- 00:22:08.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:08.907 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:08.907 10:32:40 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2571997 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2571997 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2571997 ']' 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:08.907 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.908 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:08.908 10:32:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.908 [2024-12-09 10:32:40.980933] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
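The `waitforlisten 2571997` call in the trace blocks until the freshly spawned `nvmf_tgt` is alive and listening on `/var/tmp/spdk.sock`. A minimal sketch of that polling idea (hypothetical; SPDK's real `waitforlisten` in `autotest_common.sh` also probes the RPC socket with `rpc.py`) looks like this:

```shell
# Hypothetical sketch of the waitforlisten pattern: poll until the pid
# is alive AND its UNIX-domain socket exists, or give up after N tries.
wait_for_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died early
        [ -S "$sock" ] && return 0               # socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Checking `kill -0` on every iteration matters: without it, a target that crashes during startup would make the loop spin for the full timeout instead of failing fast.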
00:22:08.908 [2024-12-09 10:32:40.981026] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:08.908 [2024-12-09 10:32:41.051570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.908 [2024-12-09 10:32:41.107443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.908 [2024-12-09 10:32:41.107515] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:08.908 [2024-12-09 10:32:41.107529] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:08.908 [2024-12-09 10:32:41.107541] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:08.908 [2024-12-09 10:32:41.107551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:08.908 [2024-12-09 10:32:41.108159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Fju 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Fju 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Fju 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Fju 00:22:08.908 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.166 [2024-12-09 10:32:41.494366] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.166 [2024-12-09 10:32:41.510355] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.166 [2024-12-09 10:32:41.510625] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.166 malloc0 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2572033 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2572033 /var/tmp/bdevperf.sock 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2572033 ']' 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.166 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.425 [2024-12-09 10:32:41.638707] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:22:09.425 [2024-12-09 10:32:41.638792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572033 ] 00:22:09.425 [2024-12-09 10:32:41.705029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.425 [2024-12-09 10:32:41.763403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:09.683 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.683 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:09.683 10:32:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Fju 00:22:09.941 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:10.199 [2024-12-09 10:32:42.485953] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:10.199 TLSTESTn1 00:22:10.199 10:32:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:10.457 Running I/O for 10 seconds... 
00:22:12.325 3330.00 IOPS, 13.01 MiB/s [2024-12-09T09:32:45.699Z] 3450.00 IOPS, 13.48 MiB/s [2024-12-09T09:32:47.074Z] 3499.67 IOPS, 13.67 MiB/s [2024-12-09T09:32:48.011Z] 3487.00 IOPS, 13.62 MiB/s [2024-12-09T09:32:48.943Z] 3498.00 IOPS, 13.66 MiB/s [2024-12-09T09:32:49.875Z] 3517.17 IOPS, 13.74 MiB/s [2024-12-09T09:32:50.809Z] 3521.43 IOPS, 13.76 MiB/s [2024-12-09T09:32:51.741Z] 3525.88 IOPS, 13.77 MiB/s [2024-12-09T09:32:53.109Z] 3527.44 IOPS, 13.78 MiB/s [2024-12-09T09:32:53.109Z] 3524.50 IOPS, 13.77 MiB/s 00:22:20.668 Latency(us) 00:22:20.668 [2024-12-09T09:32:53.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.668 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:20.668 Verification LBA range: start 0x0 length 0x2000 00:22:20.668 TLSTESTn1 : 10.03 3526.17 13.77 0.00 0.00 36222.66 10874.12 36700.16 00:22:20.668 [2024-12-09T09:32:53.109Z] =================================================================================================================== 00:22:20.668 [2024-12-09T09:32:53.109Z] Total : 3526.17 13.77 0.00 0.00 36222.66 10874.12 36700.16 00:22:20.668 { 00:22:20.668 "results": [ 00:22:20.668 { 00:22:20.668 "job": "TLSTESTn1", 00:22:20.668 "core_mask": "0x4", 00:22:20.668 "workload": "verify", 00:22:20.668 "status": "finished", 00:22:20.668 "verify_range": { 00:22:20.668 "start": 0, 00:22:20.668 "length": 8192 00:22:20.668 }, 00:22:20.668 "queue_depth": 128, 00:22:20.668 "io_size": 4096, 00:22:20.668 "runtime": 10.031278, 00:22:20.668 "iops": 3526.170842837772, 00:22:20.668 "mibps": 13.774104854835047, 00:22:20.668 "io_failed": 0, 00:22:20.668 "io_timeout": 0, 00:22:20.668 "avg_latency_us": 36222.659254400845, 00:22:20.668 "min_latency_us": 10874.121481481481, 00:22:20.668 "max_latency_us": 36700.16 00:22:20.668 } 00:22:20.668 ], 00:22:20.668 "core_count": 1 00:22:20.668 } 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:20.668 10:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:20.668 nvmf_trace.0 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2572033 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2572033 ']' 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2572033 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2572033 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2572033' 00:22:20.668 killing process with pid 2572033 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2572033 00:22:20.668 Received shutdown signal, test time was about 10.000000 seconds 00:22:20.668 00:22:20.668 Latency(us) 00:22:20.668 [2024-12-09T09:32:53.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:20.668 [2024-12-09T09:32:53.109Z] =================================================================================================================== 00:22:20.668 [2024-12-09T09:32:53.109Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:20.668 10:32:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2572033 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:20.925 rmmod nvme_tcp 00:22:20.925 rmmod nvme_fabrics 00:22:20.925 rmmod nvme_keyring 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:20.925 10:32:53 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2571997 ']' 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2571997 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2571997 ']' 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2571997 00:22:20.925 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2571997 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2571997' 00:22:20.926 killing process with pid 2571997 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2571997 00:22:20.926 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2571997 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # 
iptr 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:21.183 10:32:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.717 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:23.717 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Fju 00:22:23.717 00:22:23.717 real 0m17.331s 00:22:23.717 user 0m22.272s 00:22:23.717 sys 0m5.976s 00:22:23.717 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:23.717 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:23.717 ************************************ 00:22:23.717 END TEST nvmf_fips 00:22:23.717 ************************************ 00:22:23.717 10:32:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:23.718 ************************************ 00:22:23.718 START TEST nvmf_control_msg_list 00:22:23.718 ************************************ 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:23.718 * Looking for test storage... 00:22:23.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:23.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.718 --rc genhtml_branch_coverage=1 00:22:23.718 --rc genhtml_function_coverage=1 00:22:23.718 --rc genhtml_legend=1 00:22:23.718 --rc geninfo_all_blocks=1 00:22:23.718 --rc geninfo_unexecuted_blocks=1 00:22:23.718 00:22:23.718 ' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:23.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.718 --rc genhtml_branch_coverage=1 00:22:23.718 --rc genhtml_function_coverage=1 00:22:23.718 --rc genhtml_legend=1 00:22:23.718 --rc geninfo_all_blocks=1 00:22:23.718 --rc geninfo_unexecuted_blocks=1 00:22:23.718 00:22:23.718 ' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:23.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.718 --rc genhtml_branch_coverage=1 00:22:23.718 --rc genhtml_function_coverage=1 00:22:23.718 --rc genhtml_legend=1 00:22:23.718 --rc geninfo_all_blocks=1 00:22:23.718 --rc geninfo_unexecuted_blocks=1 00:22:23.718 00:22:23.718 ' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:23.718 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:23.718 --rc genhtml_branch_coverage=1 00:22:23.718 --rc genhtml_function_coverage=1 00:22:23.718 --rc genhtml_legend=1 00:22:23.718 --rc geninfo_all_blocks=1 00:22:23.718 --rc geninfo_unexecuted_blocks=1 00:22:23.718 00:22:23.718 ' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.718 10:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.718 10:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:23.718 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:23.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:23.719 10:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:23.719 10:32:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:25.619 10:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:25.619 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:25.620 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:25.620 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:25.620 10:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:25.620 Found net devices under 0000:09:00.0: cvl_0_0 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:25.620 10:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:25.620 Found net devices under 0000:09:00.1: cvl_0_1 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.620 10:32:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.620 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:25.620 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.620 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:25.620 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:25.880 10:32:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:25.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:25.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:22:25.880 00:22:25.880 --- 10.0.0.2 ping statistics --- 00:22:25.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.880 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:25.880 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:25.880 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:22:25.880 00:22:25.880 --- 10.0.0.1 ping statistics --- 00:22:25.880 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:25.880 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2575411 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2575411 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2575411 ']' 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:25.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:25.880 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:25.880 [2024-12-09 10:32:58.168756] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:22:25.880 [2024-12-09 10:32:58.168845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:25.880 [2024-12-09 10:32:58.238636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.880 [2024-12-09 10:32:58.294421] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:25.880 [2024-12-09 10:32:58.294487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:25.880 [2024-12-09 10:32:58.294500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:25.880 [2024-12-09 10:32:58.294525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:25.880 [2024-12-09 10:32:58.294536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:25.880 [2024-12-09 10:32:58.295137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.139 [2024-12-09 10:32:58.445225] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.139 Malloc0 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:26.139 [2024-12-09 10:32:58.484743] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2575437 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.139 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2575438 00:22:26.140 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.140 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2575439 00:22:26.140 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2575437 00:22:26.140 10:32:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:26.140 [2024-12-09 10:32:58.563712] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:26.140 [2024-12-09 10:32:58.564051] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:26.140 [2024-12-09 10:32:58.564334] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:27.515 Initializing NVMe Controllers 00:22:27.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:27.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:27.515 Initialization complete. Launching workers. 00:22:27.515 ======================================================== 00:22:27.515 Latency(us) 00:22:27.515 Device Information : IOPS MiB/s Average min max 00:22:27.515 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4055.96 15.84 246.14 177.67 827.75 00:22:27.515 ======================================================== 00:22:27.515 Total : 4055.96 15.84 246.14 177.67 827.75 00:22:27.515 00:22:27.515 Initializing NVMe Controllers 00:22:27.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:27.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:27.515 Initialization complete. Launching workers. 
00:22:27.515 ======================================================== 00:22:27.515 Latency(us) 00:22:27.515 Device Information : IOPS MiB/s Average min max 00:22:27.515 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4054.96 15.84 246.26 154.43 604.69 00:22:27.515 ======================================================== 00:22:27.515 Total : 4054.96 15.84 246.26 154.43 604.69 00:22:27.515 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2575438 00:22:27.515 Initializing NVMe Controllers 00:22:27.515 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:27.515 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:27.515 Initialization complete. Launching workers. 00:22:27.515 ======================================================== 00:22:27.515 Latency(us) 00:22:27.515 Device Information : IOPS MiB/s Average min max 00:22:27.515 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40887.74 40567.30 41009.72 00:22:27.515 ======================================================== 00:22:27.515 Total : 25.00 0.10 40887.74 40567.30 41009.72 00:22:27.515 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2575439 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:27.515 10:32:59 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:27.515 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:27.515 rmmod nvme_tcp 00:22:27.515 rmmod nvme_fabrics 00:22:27.515 rmmod nvme_keyring 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2575411 ']' 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2575411 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2575411 ']' 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2575411 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:27.774 10:32:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2575411 00:22:27.774 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.774 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.774 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2575411' 00:22:27.774 killing process with pid 2575411 00:22:27.774 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2575411 00:22:27.774 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2575411 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.033 10:33:00 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:29.941 00:22:29.941 real 0m6.692s 00:22:29.941 user 0m5.951s 
00:22:29.941 sys 0m2.847s 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:29.941 ************************************ 00:22:29.941 END TEST nvmf_control_msg_list 00:22:29.941 ************************************ 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:29.941 ************************************ 00:22:29.941 START TEST nvmf_wait_for_buf 00:22:29.941 ************************************ 00:22:29.941 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:30.200 * Looking for test storage... 
00:22:30.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.200 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:22:30.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.201 --rc genhtml_branch_coverage=1 00:22:30.201 --rc genhtml_function_coverage=1 00:22:30.201 --rc genhtml_legend=1 00:22:30.201 --rc geninfo_all_blocks=1 00:22:30.201 --rc geninfo_unexecuted_blocks=1 00:22:30.201 00:22:30.201 ' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:30.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.201 --rc genhtml_branch_coverage=1 00:22:30.201 --rc genhtml_function_coverage=1 00:22:30.201 --rc genhtml_legend=1 00:22:30.201 --rc geninfo_all_blocks=1 00:22:30.201 --rc geninfo_unexecuted_blocks=1 00:22:30.201 00:22:30.201 ' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:30.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.201 --rc genhtml_branch_coverage=1 00:22:30.201 --rc genhtml_function_coverage=1 00:22:30.201 --rc genhtml_legend=1 00:22:30.201 --rc geninfo_all_blocks=1 00:22:30.201 --rc geninfo_unexecuted_blocks=1 00:22:30.201 00:22:30.201 ' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:30.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.201 --rc genhtml_branch_coverage=1 00:22:30.201 --rc genhtml_function_coverage=1 00:22:30.201 --rc genhtml_legend=1 00:22:30.201 --rc geninfo_all_blocks=1 00:22:30.201 --rc geninfo_unexecuted_blocks=1 00:22:30.201 00:22:30.201 ' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:22:30.201 10:33:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:32.791 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:32.791 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.791 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:32.792 Found net devices under 0000:09:00.0: cvl_0_0 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:32.792 10:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:32.792 Found net devices under 0000:09:00.1: cvl_0_1 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:32.792 10:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:32.792 10:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:32.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:32.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:22:32.792 00:22:32.792 --- 10.0.0.2 ping statistics --- 00:22:32.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.792 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:32.792 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:32.792 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:22:32.792 00:22:32.792 --- 10.0.0.1 ping statistics --- 00:22:32.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:32.792 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2577750 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2577750 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2577750 ']' 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.792 10:33:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.792 [2024-12-09 10:33:04.959068] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:22:32.792 [2024-12-09 10:33:04.959169] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.792 [2024-12-09 10:33:05.031310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.792 [2024-12-09 10:33:05.086749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.792 [2024-12-09 10:33:05.086801] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:32.792 [2024-12-09 10:33:05.086824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.792 [2024-12-09 10:33:05.086835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.792 [2024-12-09 10:33:05.086844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.792 [2024-12-09 10:33:05.087520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.792 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.793 
10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.793 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:33.050 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.050 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:33.050 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.050 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:33.050 Malloc0 00:22:33.050 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.050 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.051 [2024-12-09 10:33:05.327440] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:33.051 [2024-12-09 10:33:05.351664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:22:33.051 10:33:05 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:33.051 [2024-12-09 10:33:05.431292] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:34.997 Initializing NVMe Controllers 00:22:34.997 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:34.997 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:34.997 Initialization complete. Launching workers. 00:22:34.997 ======================================================== 00:22:34.997 Latency(us) 00:22:34.997 Device Information : IOPS MiB/s Average min max 00:22:34.997 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 58.97 7.37 70269.26 31896.17 151644.51 00:22:34.997 ======================================================== 00:22:34.997 Total : 58.97 7.37 70269.26 31896.17 151644.51 00:22:34.997 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.997 10:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=918 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 918 -eq 0 ]] 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.997 rmmod nvme_tcp 00:22:34.997 rmmod nvme_fabrics 00:22:34.997 rmmod nvme_keyring 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2577750 ']' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2577750 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2577750 ']' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2577750 
00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2577750 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2577750' 00:22:34.997 killing process with pid 2577750 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2577750 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2577750 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.997 10:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.997 10:33:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.530 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:37.531 00:22:37.531 real 0m7.099s 00:22:37.531 user 0m3.439s 00:22:37.531 sys 0m2.128s 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:37.531 ************************************ 00:22:37.531 END TEST nvmf_wait_for_buf 00:22:37.531 ************************************ 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:22:37.531 10:33:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:39.438 
10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:39.438 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:39.438 10:33:11 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:39.438 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:39.438 Found net devices under 0000:09:00.0: cvl_0_0 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.438 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:39.438 Found net devices under 0000:09:00.1: cvl_0_1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.439 ************************************ 00:22:39.439 START TEST nvmf_perf_adq 00:22:39.439 ************************************ 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:39.439 * Looking for test storage... 00:22:39.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:39.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.439 --rc genhtml_branch_coverage=1 00:22:39.439 --rc genhtml_function_coverage=1 00:22:39.439 --rc genhtml_legend=1 00:22:39.439 --rc geninfo_all_blocks=1 00:22:39.439 --rc geninfo_unexecuted_blocks=1 00:22:39.439 00:22:39.439 ' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:39.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.439 --rc genhtml_branch_coverage=1 00:22:39.439 --rc genhtml_function_coverage=1 00:22:39.439 --rc genhtml_legend=1 00:22:39.439 --rc geninfo_all_blocks=1 00:22:39.439 --rc geninfo_unexecuted_blocks=1 00:22:39.439 00:22:39.439 ' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:39.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.439 --rc genhtml_branch_coverage=1 00:22:39.439 --rc genhtml_function_coverage=1 00:22:39.439 --rc genhtml_legend=1 00:22:39.439 --rc geninfo_all_blocks=1 00:22:39.439 --rc geninfo_unexecuted_blocks=1 00:22:39.439 00:22:39.439 ' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:39.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.439 --rc genhtml_branch_coverage=1 00:22:39.439 --rc genhtml_function_coverage=1 00:22:39.439 --rc genhtml_legend=1 00:22:39.439 --rc geninfo_all_blocks=1 00:22:39.439 --rc geninfo_unexecuted_blocks=1 00:22:39.439 00:22:39.439 ' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.439 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.440 10:33:11 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.440 10:33:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.968 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.968 10:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:41.969 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:41.969 10:33:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:41.969 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:41.969 Found net devices under 0000:09:00.0: cvl_0_0 00:22:41.969 10:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:41.969 Found net devices under 0000:09:00.1: cvl_0_1 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:22:41.969 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:42.226 10:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:44.120 10:33:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:49.396 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:49.397 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:49.397 10:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:49.397 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:49.397 Found net devices under 0000:09:00.0: cvl_0_0 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:49.397 Found net devices under 0000:09:00.1: cvl_0_1 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:49.397 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:49.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:49.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:22:49.397 00:22:49.398 --- 10.0.0.2 ping statistics --- 00:22:49.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.398 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:49.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:49.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:22:49.398 00:22:49.398 --- 10.0.0.1 ping statistics --- 00:22:49.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:49.398 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2582987 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2582987 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2582987 ']' 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:49.398 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.398 [2024-12-09 10:33:21.797716] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:22:49.398 [2024-12-09 10:33:21.797794] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.655 [2024-12-09 10:33:21.871920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:49.655 [2024-12-09 10:33:21.933876] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.655 [2024-12-09 10:33:21.933946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.655 [2024-12-09 10:33:21.933975] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.655 [2024-12-09 10:33:21.933986] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.655 [2024-12-09 10:33:21.933996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:49.655 [2024-12-09 10:33:21.935692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.655 [2024-12-09 10:33:21.935754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:49.655 [2024-12-09 10:33:21.935807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:49.655 [2024-12-09 10:33:21.935810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:49.655 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.655 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 [2024-12-09 10:33:22.200849] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 Malloc1 00:22:49.912 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.912 [2024-12-09 10:33:22.259296] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2583142 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:49.912 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:52.434 "tick_rate": 2700000000, 00:22:52.434 "poll_groups": [ 00:22:52.434 { 00:22:52.434 "name": "nvmf_tgt_poll_group_000", 00:22:52.434 "admin_qpairs": 1, 00:22:52.434 "io_qpairs": 1, 00:22:52.434 "current_admin_qpairs": 1, 00:22:52.434 "current_io_qpairs": 1, 00:22:52.434 "pending_bdev_io": 0, 00:22:52.434 "completed_nvme_io": 18382, 00:22:52.434 "transports": [ 00:22:52.434 { 00:22:52.434 "trtype": "TCP" 00:22:52.434 } 00:22:52.434 ] 00:22:52.434 }, 00:22:52.434 { 00:22:52.434 "name": "nvmf_tgt_poll_group_001", 00:22:52.434 "admin_qpairs": 0, 00:22:52.434 "io_qpairs": 1, 00:22:52.434 "current_admin_qpairs": 0, 00:22:52.434 "current_io_qpairs": 1, 00:22:52.434 "pending_bdev_io": 0, 00:22:52.434 "completed_nvme_io": 18161, 00:22:52.434 "transports": [ 00:22:52.434 { 00:22:52.434 "trtype": "TCP" 00:22:52.434 } 00:22:52.434 ] 00:22:52.434 }, 00:22:52.434 { 00:22:52.434 "name": "nvmf_tgt_poll_group_002", 00:22:52.434 "admin_qpairs": 0, 00:22:52.434 "io_qpairs": 1, 00:22:52.434 "current_admin_qpairs": 0, 00:22:52.434 "current_io_qpairs": 1, 00:22:52.434 "pending_bdev_io": 0, 00:22:52.434 "completed_nvme_io": 
17753, 00:22:52.434 "transports": [ 00:22:52.434 { 00:22:52.434 "trtype": "TCP" 00:22:52.434 } 00:22:52.434 ] 00:22:52.434 }, 00:22:52.434 { 00:22:52.434 "name": "nvmf_tgt_poll_group_003", 00:22:52.434 "admin_qpairs": 0, 00:22:52.434 "io_qpairs": 1, 00:22:52.434 "current_admin_qpairs": 0, 00:22:52.434 "current_io_qpairs": 1, 00:22:52.434 "pending_bdev_io": 0, 00:22:52.434 "completed_nvme_io": 18783, 00:22:52.434 "transports": [ 00:22:52.434 { 00:22:52.434 "trtype": "TCP" 00:22:52.434 } 00:22:52.434 ] 00:22:52.434 } 00:22:52.434 ] 00:22:52.434 }' 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:52.434 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2583142 00:23:00.542 Initializing NVMe Controllers 00:23:00.542 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:00.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:00.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:00.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:00.542 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:00.542 Initialization complete. Launching workers. 
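The check traced above (`perf_adq.sh@86`) queries `nvmf_get_stats` while `spdk_nvme_perf` is running and requires that the number of poll groups with exactly one active IO qpair equals the core count, confirming that ADQ steered one connection to each poll group. A sketch of that sanity check; the real script pipes the JSON through `jq`, so `grep -c` stands in here to keep the sketch dependency-free, and the trimmed JSON below is a stand-in for real stats output:

```shell
#!/usr/bin/env bash
# Sketch of the perf_adq poll-group check: count poll groups that
# report exactly one active IO qpair and compare against the
# expected core count (4, matching the -m 0xF mask in the run).
set -euo pipefail

nvmf_stats='{
  "poll_groups": [
    { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 1 },
    { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 1 }
  ]
}'

count=$(grep -c '"current_io_qpairs": 1' <<< "$nvmf_stats")
if [[ $count -ne 4 ]]; then
    echo "qpair steering failed: $count of 4 poll groups active" >&2
    exit 1
fi
echo "all $count poll groups have an active IO qpair"
```

If steering fails, some groups show `current_io_qpairs: 0` while others carry several connections, and the count falls short of the core count.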
00:23:00.542 ======================================================== 00:23:00.542 Latency(us) 00:23:00.542 Device Information : IOPS MiB/s Average min max 00:23:00.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10531.80 41.14 6076.63 2489.50 9621.40 00:23:00.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10249.20 40.04 6245.94 2456.54 9837.23 00:23:00.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10043.50 39.23 6372.26 2583.70 11031.83 00:23:00.542 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10224.20 39.94 6261.08 2459.00 11376.33 00:23:00.542 ======================================================== 00:23:00.542 Total : 41048.70 160.35 6237.18 2456.54 11376.33 00:23:00.542 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.543 rmmod nvme_tcp 00:23:00.543 rmmod nvme_fabrics 00:23:00.543 rmmod nvme_keyring 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:00.543 10:33:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2582987 ']' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2582987 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2582987 ']' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2582987 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2582987 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582987' 00:23:00.543 killing process with pid 2582987 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2582987 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2582987 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:00.543 
10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.543 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.075 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:03.075 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:03.075 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:03.075 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:03.333 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:05.235 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:10.535 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:10.535 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:10.535 
Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:10.535 Found net devices under 0000:09:00.0: cvl_0_0 00:23:10.535 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:10.535 Found net devices under 0000:09:00.1: cvl_0_1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:10.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:23:10.535 00:23:10.535 --- 10.0.0.2 ping statistics --- 00:23:10.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.535 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:23:10.535 00:23:10.535 --- 10.0.0.1 ping statistics --- 00:23:10.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.535 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:10.535 net.core.busy_poll = 1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:10.535 net.core.busy_read = 1 00:23:10.535 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:10.536 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:10.536 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:10.536 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:10.536 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2585762 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
2585762 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2585762 ']' 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:10.794 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:10.794 [2024-12-09 10:33:43.046699] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:10.794 [2024-12-09 10:33:43.046805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.794 [2024-12-09 10:33:43.124164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:10.794 [2024-12-09 10:33:43.183914] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.794 [2024-12-09 10:33:43.183975] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.794 [2024-12-09 10:33:43.184012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.794 [2024-12-09 10:33:43.184024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:10.794 [2024-12-09 10:33:43.184034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:10.794 [2024-12-09 10:33:43.185674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:10.794 [2024-12-09 10:33:43.185732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.794 [2024-12-09 10:33:43.185798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.794 [2024-12-09 10:33:43.185801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.052 [2024-12-09 10:33:43.479263] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:11.052 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.052 10:33:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.323 Malloc1 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:11.323 [2024-12-09 10:33:43.545560] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2585806 
00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:11.323 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:13.223 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:13.223 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.223 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.223 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.223 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:13.223 "tick_rate": 2700000000, 00:23:13.223 "poll_groups": [ 00:23:13.223 { 00:23:13.223 "name": "nvmf_tgt_poll_group_000", 00:23:13.223 "admin_qpairs": 1, 00:23:13.223 "io_qpairs": 3, 00:23:13.223 "current_admin_qpairs": 1, 00:23:13.223 "current_io_qpairs": 3, 00:23:13.223 "pending_bdev_io": 0, 00:23:13.223 "completed_nvme_io": 25584, 00:23:13.223 "transports": [ 00:23:13.223 { 00:23:13.223 "trtype": "TCP" 00:23:13.223 } 00:23:13.223 ] 00:23:13.223 }, 00:23:13.223 { 00:23:13.223 "name": "nvmf_tgt_poll_group_001", 00:23:13.223 "admin_qpairs": 0, 00:23:13.223 "io_qpairs": 1, 00:23:13.223 "current_admin_qpairs": 0, 00:23:13.223 "current_io_qpairs": 1, 00:23:13.223 "pending_bdev_io": 0, 00:23:13.223 "completed_nvme_io": 25079, 00:23:13.223 "transports": [ 00:23:13.223 { 00:23:13.223 "trtype": "TCP" 00:23:13.223 } 00:23:13.223 ] 00:23:13.223 }, 00:23:13.223 { 00:23:13.223 "name": "nvmf_tgt_poll_group_002", 00:23:13.223 "admin_qpairs": 0, 00:23:13.223 "io_qpairs": 0, 00:23:13.223 "current_admin_qpairs": 0, 
00:23:13.223 "current_io_qpairs": 0, 00:23:13.223 "pending_bdev_io": 0, 00:23:13.223 "completed_nvme_io": 0, 00:23:13.223 "transports": [ 00:23:13.223 { 00:23:13.223 "trtype": "TCP" 00:23:13.223 } 00:23:13.223 ] 00:23:13.223 }, 00:23:13.223 { 00:23:13.223 "name": "nvmf_tgt_poll_group_003", 00:23:13.223 "admin_qpairs": 0, 00:23:13.223 "io_qpairs": 0, 00:23:13.223 "current_admin_qpairs": 0, 00:23:13.223 "current_io_qpairs": 0, 00:23:13.223 "pending_bdev_io": 0, 00:23:13.223 "completed_nvme_io": 0, 00:23:13.224 "transports": [ 00:23:13.224 { 00:23:13.224 "trtype": "TCP" 00:23:13.224 } 00:23:13.224 ] 00:23:13.224 } 00:23:13.224 ] 00:23:13.224 }' 00:23:13.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:13.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:13.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:13.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:13.224 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2585806 00:23:21.329 Initializing NVMe Controllers 00:23:21.329 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:21.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:21.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:21.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:21.329 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:21.329 Initialization complete. Launching workers. 
00:23:21.329 ======================================================== 00:23:21.329 Latency(us) 00:23:21.329 Device Information : IOPS MiB/s Average min max 00:23:21.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4105.10 16.04 15653.10 1876.30 63392.91 00:23:21.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13247.40 51.75 4831.00 1245.63 7292.95 00:23:21.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4799.90 18.75 13377.64 2049.10 63822.71 00:23:21.329 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4613.00 18.02 13926.90 1184.91 62053.91 00:23:21.329 ======================================================== 00:23:21.329 Total : 26765.40 104.55 9591.18 1184.91 63822.71 00:23:21.329 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.587 rmmod nvme_tcp 00:23:21.587 rmmod nvme_fabrics 00:23:21.587 rmmod nvme_keyring 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:21.587 10:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2585762 ']' 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2585762 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2585762 ']' 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2585762 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2585762 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2585762' 00:23:21.587 killing process with pid 2585762 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2585762 00:23:21.587 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2585762 00:23:21.846 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:21.847 
10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.847 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:24.381 00:23:24.381 real 0m44.496s 00:23:24.381 user 2m40.500s 00:23:24.381 sys 0m9.866s 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.381 ************************************ 00:23:24.381 END TEST nvmf_perf_adq 00:23:24.381 ************************************ 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:23:24.381 ************************************ 00:23:24.381 START TEST nvmf_shutdown 00:23:24.381 ************************************ 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:24.381 * Looking for test storage... 00:23:24.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.381 10:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.381 --rc genhtml_branch_coverage=1 00:23:24.381 --rc genhtml_function_coverage=1 00:23:24.381 --rc genhtml_legend=1 00:23:24.381 --rc geninfo_all_blocks=1 00:23:24.381 --rc geninfo_unexecuted_blocks=1 00:23:24.381 00:23:24.381 ' 00:23:24.381 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:24.381 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.381 --rc genhtml_branch_coverage=1 00:23:24.381 --rc genhtml_function_coverage=1 00:23:24.381 --rc genhtml_legend=1 00:23:24.381 --rc geninfo_all_blocks=1 00:23:24.381 --rc geninfo_unexecuted_blocks=1 00:23:24.382 00:23:24.382 ' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:24.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.382 --rc genhtml_branch_coverage=1 00:23:24.382 --rc genhtml_function_coverage=1 00:23:24.382 --rc genhtml_legend=1 00:23:24.382 --rc geninfo_all_blocks=1 00:23:24.382 --rc geninfo_unexecuted_blocks=1 00:23:24.382 00:23:24.382 ' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:24.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.382 --rc genhtml_branch_coverage=1 00:23:24.382 --rc genhtml_function_coverage=1 00:23:24.382 --rc genhtml_legend=1 00:23:24.382 --rc geninfo_all_blocks=1 00:23:24.382 --rc geninfo_unexecuted_blocks=1 00:23:24.382 00:23:24.382 ' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.382 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:24.382 ************************************ 00:23:24.382 START TEST nvmf_shutdown_tc1 00:23:24.382 ************************************ 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.382 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.362 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.362 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.362 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.362 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:26.363 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.363 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:26.363 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.363 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:26.363 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:26.363 Found net devices under 0000:09:00.0: cvl_0_0 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:26.363 Found net devices under 0000:09:00.1: cvl_0_1 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.363 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.363 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:23:26.363 00:23:26.364 --- 10.0.0.2 ping statistics --- 00:23:26.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.364 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:23:26.364 00:23:26.364 --- 10.0.0.1 ping statistics --- 00:23:26.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.364 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2589081 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2589081 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2589081 ']' 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:26.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.364 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.364 [2024-12-09 10:33:58.686523] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:26.364 [2024-12-09 10:33:58.686593] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.364 [2024-12-09 10:33:58.763909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.640 [2024-12-09 10:33:58.827897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.640 [2024-12-09 10:33:58.827954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.640 [2024-12-09 10:33:58.827982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.640 [2024-12-09 10:33:58.827993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.640 [2024-12-09 10:33:58.828003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:26.640 [2024-12-09 10:33:58.829589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.640 [2024-12-09 10:33:58.829646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.640 [2024-12-09 10:33:58.829713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:26.640 [2024-12-09 10:33:58.829717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.640 [2024-12-09 10:33:58.991300] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.640 10:33:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:26.640 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.640 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:26.898 Malloc1 00:23:26.898 [2024-12-09 10:33:59.102915] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.898 Malloc2 00:23:26.898 Malloc3 00:23:26.898 Malloc4 00:23:26.898 Malloc5 00:23:26.898 Malloc6 00:23:27.155 Malloc7 00:23:27.155 Malloc8 00:23:27.155 Malloc9 
00:23:27.155 Malloc10 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2589193 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2589193 /var/tmp/bdevperf.sock 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2589193 ']' 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.155 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:27.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.156 { 00:23:27.156 "params": { 00:23:27.156 "name": "Nvme$subsystem", 00:23:27.156 "trtype": "$TEST_TRANSPORT", 00:23:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.156 "adrfam": "ipv4", 00:23:27.156 "trsvcid": "$NVMF_PORT", 00:23:27.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.156 "hdgst": ${hdgst:-false}, 00:23:27.156 "ddgst": ${ddgst:-false} 00:23:27.156 }, 00:23:27.156 "method": "bdev_nvme_attach_controller" 00:23:27.156 } 00:23:27.156 EOF 00:23:27.156 )") 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.156 { 00:23:27.156 "params": { 00:23:27.156 "name": "Nvme$subsystem", 00:23:27.156 "trtype": "$TEST_TRANSPORT", 00:23:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.156 "adrfam": "ipv4", 00:23:27.156 "trsvcid": "$NVMF_PORT", 00:23:27.156 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.156 "hdgst": ${hdgst:-false}, 00:23:27.156 "ddgst": ${ddgst:-false} 00:23:27.156 }, 00:23:27.156 "method": "bdev_nvme_attach_controller" 00:23:27.156 } 00:23:27.156 EOF 00:23:27.156 )") 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.156 { 00:23:27.156 "params": { 00:23:27.156 "name": "Nvme$subsystem", 00:23:27.156 "trtype": "$TEST_TRANSPORT", 00:23:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.156 "adrfam": "ipv4", 00:23:27.156 "trsvcid": "$NVMF_PORT", 00:23:27.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.156 "hdgst": ${hdgst:-false}, 00:23:27.156 "ddgst": ${ddgst:-false} 00:23:27.156 }, 00:23:27.156 "method": "bdev_nvme_attach_controller" 00:23:27.156 } 00:23:27.156 EOF 00:23:27.156 )") 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.156 { 00:23:27.156 "params": { 00:23:27.156 "name": "Nvme$subsystem", 00:23:27.156 "trtype": "$TEST_TRANSPORT", 00:23:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.156 "adrfam": "ipv4", 00:23:27.156 "trsvcid": "$NVMF_PORT", 00:23:27.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.156 "hdgst": 
${hdgst:-false}, 00:23:27.156 "ddgst": ${ddgst:-false} 00:23:27.156 }, 00:23:27.156 "method": "bdev_nvme_attach_controller" 00:23:27.156 } 00:23:27.156 EOF 00:23:27.156 )") 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.156 { 00:23:27.156 "params": { 00:23:27.156 "name": "Nvme$subsystem", 00:23:27.156 "trtype": "$TEST_TRANSPORT", 00:23:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.156 "adrfam": "ipv4", 00:23:27.156 "trsvcid": "$NVMF_PORT", 00:23:27.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.156 "hdgst": ${hdgst:-false}, 00:23:27.156 "ddgst": ${ddgst:-false} 00:23:27.156 }, 00:23:27.156 "method": "bdev_nvme_attach_controller" 00:23:27.156 } 00:23:27.156 EOF 00:23:27.156 )") 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.156 { 00:23:27.156 "params": { 00:23:27.156 "name": "Nvme$subsystem", 00:23:27.156 "trtype": "$TEST_TRANSPORT", 00:23:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.156 "adrfam": "ipv4", 00:23:27.156 "trsvcid": "$NVMF_PORT", 00:23:27.156 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.156 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.156 "hdgst": ${hdgst:-false}, 00:23:27.156 "ddgst": ${ddgst:-false} 00:23:27.156 }, 00:23:27.156 "method": "bdev_nvme_attach_controller" 
00:23:27.156 } 00:23:27.156 EOF 00:23:27.156 )") 00:23:27.156 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.414 { 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme$subsystem", 00:23:27.414 "trtype": "$TEST_TRANSPORT", 00:23:27.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "$NVMF_PORT", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.414 "hdgst": ${hdgst:-false}, 00:23:27.414 "ddgst": ${ddgst:-false} 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 } 00:23:27.414 EOF 00:23:27.414 )") 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.414 { 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme$subsystem", 00:23:27.414 "trtype": "$TEST_TRANSPORT", 00:23:27.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "$NVMF_PORT", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.414 "hdgst": ${hdgst:-false}, 00:23:27.414 "ddgst": ${ddgst:-false} 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 } 00:23:27.414 EOF 00:23:27.414 )") 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.414 { 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme$subsystem", 00:23:27.414 "trtype": "$TEST_TRANSPORT", 00:23:27.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "$NVMF_PORT", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.414 "hdgst": ${hdgst:-false}, 00:23:27.414 "ddgst": ${ddgst:-false} 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 } 00:23:27.414 EOF 00:23:27.414 )") 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.414 { 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme$subsystem", 00:23:27.414 "trtype": "$TEST_TRANSPORT", 00:23:27.414 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "$NVMF_PORT", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.414 "hdgst": ${hdgst:-false}, 00:23:27.414 "ddgst": ${ddgst:-false} 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 } 00:23:27.414 EOF 00:23:27.414 )") 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:27.414 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme1", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme2", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme3", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme4", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 
00:23:27.414 "params": { 00:23:27.414 "name": "Nvme5", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme6", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme7", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.414 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:27.414 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:27.414 "hdgst": false, 00:23:27.414 "ddgst": false 00:23:27.414 }, 00:23:27.414 "method": "bdev_nvme_attach_controller" 00:23:27.414 },{ 00:23:27.414 "params": { 00:23:27.414 "name": "Nvme8", 00:23:27.414 "trtype": "tcp", 00:23:27.414 "traddr": "10.0.0.2", 00:23:27.414 "adrfam": "ipv4", 00:23:27.414 "trsvcid": "4420", 00:23:27.415 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:27.415 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:27.415 "hdgst": false, 00:23:27.415 "ddgst": false 00:23:27.415 }, 00:23:27.415 "method": "bdev_nvme_attach_controller" 00:23:27.415 },{ 00:23:27.415 "params": { 00:23:27.415 "name": "Nvme9", 00:23:27.415 "trtype": "tcp", 00:23:27.415 "traddr": "10.0.0.2", 00:23:27.415 "adrfam": "ipv4", 00:23:27.415 "trsvcid": "4420", 00:23:27.415 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:27.415 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:27.415 "hdgst": false, 00:23:27.415 "ddgst": false 00:23:27.415 }, 00:23:27.415 "method": "bdev_nvme_attach_controller" 00:23:27.415 },{ 00:23:27.415 "params": { 00:23:27.415 "name": "Nvme10", 00:23:27.415 "trtype": "tcp", 00:23:27.415 "traddr": "10.0.0.2", 00:23:27.415 "adrfam": "ipv4", 00:23:27.415 "trsvcid": "4420", 00:23:27.415 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:27.415 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:27.415 "hdgst": false, 00:23:27.415 "ddgst": false 00:23:27.415 }, 00:23:27.415 "method": "bdev_nvme_attach_controller" 00:23:27.415 }' 00:23:27.415 [2024-12-09 10:33:59.623677] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:27.415 [2024-12-09 10:33:59.623753] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:27.415 [2024-12-09 10:33:59.696075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.415 [2024-12-09 10:33:59.755349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.369 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.369 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:29.369 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:29.369 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.369 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:29.370 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.370 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2589193 00:23:29.370 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:29.370 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:30.299 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2589193 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2589081 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.299 { 00:23:30.299 "params": { 00:23:30.299 "name": "Nvme$subsystem", 00:23:30.299 "trtype": "$TEST_TRANSPORT", 00:23:30.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.299 "adrfam": "ipv4", 00:23:30.299 "trsvcid": "$NVMF_PORT", 00:23:30.299 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.299 "hdgst": ${hdgst:-false}, 00:23:30.299 "ddgst": ${ddgst:-false} 00:23:30.299 }, 00:23:30.299 "method": "bdev_nvme_attach_controller" 00:23:30.299 } 00:23:30.299 EOF 00:23:30.299 )") 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.299 { 00:23:30.299 "params": { 00:23:30.299 "name": "Nvme$subsystem", 00:23:30.299 "trtype": "$TEST_TRANSPORT", 00:23:30.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.299 "adrfam": "ipv4", 00:23:30.299 "trsvcid": "$NVMF_PORT", 00:23:30.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.299 "hdgst": ${hdgst:-false}, 00:23:30.299 "ddgst": ${ddgst:-false} 00:23:30.299 }, 00:23:30.299 "method": "bdev_nvme_attach_controller" 00:23:30.299 } 00:23:30.299 EOF 00:23:30.299 )") 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.299 { 00:23:30.299 "params": { 00:23:30.299 "name": "Nvme$subsystem", 00:23:30.299 "trtype": "$TEST_TRANSPORT", 00:23:30.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.299 "adrfam": "ipv4", 00:23:30.299 "trsvcid": "$NVMF_PORT", 00:23:30.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.299 "hdgst": 
${hdgst:-false}, 00:23:30.299 "ddgst": ${ddgst:-false} 00:23:30.299 }, 00:23:30.299 "method": "bdev_nvme_attach_controller" 00:23:30.299 } 00:23:30.299 EOF 00:23:30.299 )") 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.299 { 00:23:30.299 "params": { 00:23:30.299 "name": "Nvme$subsystem", 00:23:30.299 "trtype": "$TEST_TRANSPORT", 00:23:30.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.299 "adrfam": "ipv4", 00:23:30.299 "trsvcid": "$NVMF_PORT", 00:23:30.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.299 "hdgst": ${hdgst:-false}, 00:23:30.299 "ddgst": ${ddgst:-false} 00:23:30.299 }, 00:23:30.299 "method": "bdev_nvme_attach_controller" 00:23:30.299 } 00:23:30.299 EOF 00:23:30.299 )") 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.299 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.299 { 00:23:30.299 "params": { 00:23:30.299 "name": "Nvme$subsystem", 00:23:30.299 "trtype": "$TEST_TRANSPORT", 00:23:30.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.299 "adrfam": "ipv4", 00:23:30.299 "trsvcid": "$NVMF_PORT", 00:23:30.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.299 "hdgst": ${hdgst:-false}, 00:23:30.299 "ddgst": ${ddgst:-false} 00:23:30.299 }, 00:23:30.299 "method": "bdev_nvme_attach_controller" 
00:23:30.299 } 00:23:30.299 EOF 00:23:30.299 )") 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.300 { 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme$subsystem", 00:23:30.300 "trtype": "$TEST_TRANSPORT", 00:23:30.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "$NVMF_PORT", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.300 "hdgst": ${hdgst:-false}, 00:23:30.300 "ddgst": ${ddgst:-false} 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 } 00:23:30.300 EOF 00:23:30.300 )") 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.300 { 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme$subsystem", 00:23:30.300 "trtype": "$TEST_TRANSPORT", 00:23:30.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "$NVMF_PORT", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.300 "hdgst": ${hdgst:-false}, 00:23:30.300 "ddgst": ${ddgst:-false} 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 } 00:23:30.300 EOF 00:23:30.300 )") 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.300 { 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme$subsystem", 00:23:30.300 "trtype": "$TEST_TRANSPORT", 00:23:30.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "$NVMF_PORT", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.300 "hdgst": ${hdgst:-false}, 00:23:30.300 "ddgst": ${ddgst:-false} 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 } 00:23:30.300 EOF 00:23:30.300 )") 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.300 { 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme$subsystem", 00:23:30.300 "trtype": "$TEST_TRANSPORT", 00:23:30.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "$NVMF_PORT", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.300 "hdgst": ${hdgst:-false}, 00:23:30.300 "ddgst": ${ddgst:-false} 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 } 00:23:30.300 EOF 00:23:30.300 )") 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:30.300 { 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme$subsystem", 00:23:30.300 "trtype": "$TEST_TRANSPORT", 00:23:30.300 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "$NVMF_PORT", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:30.300 "hdgst": ${hdgst:-false}, 00:23:30.300 "ddgst": ${ddgst:-false} 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 } 00:23:30.300 EOF 00:23:30.300 )") 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:30.300 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme1", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme2", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 
00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme3", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme4", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme5", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme6", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.300 "name": "Nvme7", 00:23:30.300 "trtype": "tcp", 00:23:30.300 "traddr": "10.0.0.2", 00:23:30.300 "adrfam": "ipv4", 00:23:30.300 "trsvcid": "4420", 00:23:30.300 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:30.300 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:30.300 "hdgst": false, 00:23:30.300 "ddgst": false 00:23:30.300 }, 00:23:30.300 "method": "bdev_nvme_attach_controller" 00:23:30.300 },{ 00:23:30.300 "params": { 00:23:30.301 "name": "Nvme8", 00:23:30.301 "trtype": "tcp", 00:23:30.301 "traddr": "10.0.0.2", 00:23:30.301 "adrfam": "ipv4", 00:23:30.301 "trsvcid": "4420", 00:23:30.301 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:30.301 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:30.301 "hdgst": false, 00:23:30.301 "ddgst": false 00:23:30.301 }, 00:23:30.301 "method": "bdev_nvme_attach_controller" 00:23:30.301 },{ 00:23:30.301 "params": { 00:23:30.301 "name": "Nvme9", 00:23:30.301 "trtype": "tcp", 00:23:30.301 "traddr": "10.0.0.2", 00:23:30.301 "adrfam": "ipv4", 00:23:30.301 "trsvcid": "4420", 00:23:30.301 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:30.301 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:30.301 "hdgst": false, 00:23:30.301 "ddgst": false 00:23:30.301 }, 00:23:30.301 "method": "bdev_nvme_attach_controller" 00:23:30.301 },{ 00:23:30.301 "params": { 00:23:30.301 "name": "Nvme10", 00:23:30.301 "trtype": "tcp", 00:23:30.301 "traddr": "10.0.0.2", 00:23:30.301 "adrfam": "ipv4", 00:23:30.301 "trsvcid": "4420", 00:23:30.301 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:30.301 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:30.301 "hdgst": false, 00:23:30.301 "ddgst": false 00:23:30.301 }, 00:23:30.301 "method": "bdev_nvme_attach_controller" 00:23:30.301 }' 00:23:30.301 [2024-12-09 10:34:02.714015] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:23:30.301 [2024-12-09 10:34:02.714104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2589568 ]
00:23:30.557 [2024-12-09 10:34:02.787623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:30.557 [2024-12-09 10:34:02.850336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:31.931 Running I/O for 1 seconds...
00:23:33.124 1748.00 IOPS, 109.25 MiB/s
00:23:33.124 Latency(us)
00:23:33.124 [2024-12-09T09:34:05.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:33.124 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme1n1 : 1.15 225.19 14.07 0.00 0.00 280521.34 4199.16 260978.92
00:23:33.124 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme2n1 : 1.11 229.62 14.35 0.00 0.00 271349.76 18447.17 260978.92
00:23:33.124 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme3n1 : 1.09 238.75 14.92 0.00 0.00 250700.17 11505.21 251658.24
00:23:33.124 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme4n1 : 1.10 233.13 14.57 0.00 0.00 257986.94 18350.08 253211.69
00:23:33.124 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme5n1 : 1.18 217.16 13.57 0.00 0.00 269036.66 20194.80 273406.48
00:23:33.124 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme6n1 : 1.12 228.48 14.28 0.00 0.00 254444.47 21845.33 260978.92
00:23:33.124 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme7n1 : 1.15 225.76 14.11 0.00 0.00 252816.70 6019.60 259425.47
00:23:33.124 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme8n1 : 1.19 268.15 16.76 0.00 0.00 210495.72 11311.03 237677.23
00:23:33.124 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme9n1 : 1.19 218.62 13.66 0.00 0.00 254063.28 3094.76 299815.06
00:23:33.124 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:33.124 Verification LBA range: start 0x0 length 0x400
00:23:33.124 Nvme10n1 : 1.20 266.66 16.67 0.00 0.00 205047.39 5752.60 271853.04
00:23:33.124 [2024-12-09T09:34:05.565Z] ===================================================================================================================
00:23:33.124 [2024-12-09T09:34:05.565Z] Total : 2351.53 146.97 0.00 0.00 248669.96 3094.76 299815.06
00:23:33.381 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:23:33.381 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:33.381 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1
-- target/shutdown.sh@46 -- # nvmftestfini 00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.382 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.382 rmmod nvme_tcp 00:23:33.382 rmmod nvme_fabrics 00:23:33.639 rmmod nvme_keyring 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2589081 ']' 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2589081 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2589081 ']' 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2589081 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2589081 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2589081' 00:23:33.639 killing process with pid 2589081 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2589081 00:23:33.639 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2589081 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.207 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:36.118 00:23:36.118 real 0m12.039s 00:23:36.118 user 0m35.185s 00:23:36.118 sys 0m3.259s 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:36.118 ************************************ 00:23:36.118 END TEST nvmf_shutdown_tc1 00:23:36.118 ************************************ 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:36.118 ************************************ 00:23:36.118 START TEST nvmf_shutdown_tc2 00:23:36.118 ************************************ 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:36.118 10:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.118 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:36.376 10:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:36.376 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:36.377 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:36.377 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:36.377 Found net devices under 0000:09:00.0: cvl_0_0 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.377 10:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:36.377 Found net devices under 0000:09:00.1: cvl_0_1 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:36.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:36.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.291 ms 00:23:36.377 00:23:36.377 --- 10.0.0.2 ping statistics --- 00:23:36.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.377 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:36.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:36.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:36.377 00:23:36.377 --- 10.0.0.1 ping statistics --- 00:23:36.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:36.377 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.377 
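The trace above (nvmf/common.sh@265-291) splits the two `cvl_0_*` ports into a target network namespace and the root namespace, then verifies connectivity both ways. A dry-run sketch of that sequence, with `RUN=echo` printing each command instead of executing it (a real run needs root and the actual interfaces, so this is purely illustrative):

```shell
# Dry-run sketch of the namespace split performed in nvmf/common.sh@265-291.
# RUN=echo prints each command instead of executing it; interface and
# namespace names are taken from the log above.
RUN=echo
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0        # moved into the namespace, owns the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace, owns the initiator IP

setup_ns() {
  $RUN ip netns add "$NS"
  $RUN ip link set "$TARGET_IF" netns "$NS"
  $RUN ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  $RUN ip link set "$INITIATOR_IF" up
  $RUN ip netns exec "$NS" ip link set "$TARGET_IF" up
  $RUN ip netns exec "$NS" ip link set lo up
  # open the NVMe/TCP port on the initiator side, mirroring the ipts helper
  $RUN iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions, as the harness does with ping -c 1
  $RUN ping -c 1 10.0.0.2
  $RUN ip netns exec "$NS" ping -c 1 10.0.0.1
}
setup_ns
```

Moving `cvl_0_0` into its own namespace is what lets the target and initiator share one host while still exercising a real TCP path between distinct network stacks.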
10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2590413 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2590413 00:23:36.377 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2590413 ']' 00:23:36.378 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.378 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:36.378 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.378 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:36.378 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.378 [2024-12-09 10:34:08.792029] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:23:36.378 [2024-12-09 10:34:08.792103] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.635 [2024-12-09 10:34:08.867979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:36.635 [2024-12-09 10:34:08.927999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.635 [2024-12-09 10:34:08.928061] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.635 [2024-12-09 10:34:08.928090] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.635 [2024-12-09 10:34:08.928102] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.635 [2024-12-09 10:34:08.928111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:36.635 [2024-12-09 10:34:08.929756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.635 [2024-12-09 10:34:08.929818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:36.635 [2024-12-09 10:34:08.929840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:36.635 [2024-12-09 10:34:08.929843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.635 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.893 [2024-12-09 10:34:09.080708] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.893 10:34:09 
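The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper (autotest_common.sh@835-844). A hedged sketch of that pattern: poll until the app's RPC socket appears, bounded by a retry budget, and bail out early if the app dies during startup. The explicit retry-count argument is an addition for illustration; the real helper also probes the socket with an RPC rather than only checking that it exists.

```shell
# Sketch of the waitforlisten pattern: wait for an app's RPC UNIX socket,
# failing fast if the process exits first. max_retries argument is
# illustrative; defaults mirror the values seen in the log.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
  local i=0
  while [ "$i" -lt "$max_retries" ]; do
    kill -0 "$pid" 2>/dev/null || return 1   # app exited during startup
    [ -S "$rpc_addr" ] && return 0           # socket is up: ready for RPCs
    i=$((i+1))
    sleep 0.1
  done
  return 1                                   # gave up after max_retries polls
}
```

Checking `kill -0` on every iteration matters: without it, a crashed nvmf_tgt would make the caller block for the full retry budget instead of failing immediately.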
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.893 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:36.893 Malloc1 00:23:36.893 [2024-12-09 10:34:09.187055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.893 Malloc2 00:23:36.893 Malloc3 00:23:36.893 Malloc4 00:23:37.150 Malloc5 00:23:37.150 Malloc6 00:23:37.150 Malloc7 00:23:37.150 Malloc8 00:23:37.150 Malloc9 
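The ten repeated `for i in "${num_subsystems[@]}"` / `cat` pairs above (shutdown.sh@27-29) each append one heredoc of RPC commands to rpcs.txt, which is then replayed in a single `rpc_cmd` batch at shutdown.sh@36. The trace only shows the `cat` invocations, not their bodies, so the RPC lines below are illustrative stand-ins:

```shell
# Sketch of the rpcs.txt batching loop. Each iteration appends the RPCs for
# one subsystem; the specific RPC bodies here are illustrative, since the
# xtrace output above does not show the heredoc contents.
RPCS=$(mktemp)
num_subsystems=({1..10})
for i in "${num_subsystems[@]}"; do
  cat >> "$RPCS" <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
wc -l < "$RPCS"    # four RPCs per subsystem, replayed as one batch
```

Batching all forty RPCs into one `rpc_cmd` invocation avoids forty separate round-trips to /var/tmp/spdk.sock, which is why the Malloc1..Malloc10 bdevs appear in quick succession in the log.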
00:23:37.407 Malloc10 00:23:37.407 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2590513 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2590513 /var/tmp/bdevperf.sock 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2590513 ']' 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:37.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": 
${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 
00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.408 }, 00:23:37.408 "method": "bdev_nvme_attach_controller" 00:23:37.408 } 00:23:37.408 EOF 00:23:37.408 )") 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.408 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.408 { 00:23:37.408 "params": { 00:23:37.408 "name": "Nvme$subsystem", 00:23:37.408 "trtype": "$TEST_TRANSPORT", 00:23:37.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.408 "adrfam": "ipv4", 00:23:37.408 "trsvcid": "$NVMF_PORT", 00:23:37.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.408 "hdgst": ${hdgst:-false}, 00:23:37.408 "ddgst": ${ddgst:-false} 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 } 00:23:37.409 EOF 00:23:37.409 )") 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:37.409 { 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme$subsystem", 00:23:37.409 "trtype": "$TEST_TRANSPORT", 00:23:37.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "$NVMF_PORT", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:37.409 "hdgst": ${hdgst:-false}, 00:23:37.409 "ddgst": ${ddgst:-false} 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 } 00:23:37.409 EOF 00:23:37.409 )") 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:37.409 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme1", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme2", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme3", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme4", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 
00:23:37.409 "params": { 00:23:37.409 "name": "Nvme5", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme6", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme7", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme8", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme9", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:37.409 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 },{ 00:23:37.409 "params": { 00:23:37.409 "name": "Nvme10", 00:23:37.409 "trtype": "tcp", 00:23:37.409 "traddr": "10.0.0.2", 00:23:37.409 "adrfam": "ipv4", 00:23:37.409 "trsvcid": "4420", 00:23:37.409 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:37.409 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:37.409 "hdgst": false, 00:23:37.409 "ddgst": false 00:23:37.409 }, 00:23:37.409 "method": "bdev_nvme_attach_controller" 00:23:37.409 }' 00:23:37.409 [2024-12-09 10:34:09.718564] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:37.409 [2024-12-09 10:34:09.718641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590513 ] 00:23:37.409 [2024-12-09 10:34:09.789101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.666 [2024-12-09 10:34:09.850699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:39.561 Running I/O for 10 seconds... 
00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.561 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:39.820 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.820 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:39.820 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:39.820 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:40.078 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2590513 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2590513 
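The three polls visible in the trace (read_io_count=3, then 67, then 131) are the waitforio loop from shutdown.sh@58-70: sample the bdev's read counter up to 10 times, sleeping 0.25s between samples, and succeed once at least 100 reads have completed. A sketch with the sampler stubbed to replay the counts from this log; the real helper pulls the number via `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
# Sketch of waitforio. read_io_count is a stub replaying the counts seen in
# this log; the real helper queries bdev_get_iostat over the bdevperf socket.
samples=(3 67 131)
n=0
read_io_count() {
  count=${samples[n]:-131}
  n=$((n+1))
}

waitforio() {
  local ret=1 i=10
  while [ "$i" -ne 0 ]; do
    read_io_count
    if [ "$count" -ge 100 ]; then
      ret=0        # enough reads completed: the I/O path is healthy
      break
    fi
    i=$((i-1))
    sleep 0.25
  done
  return $ret
}

if waitforio; then
  echo "read count reached $count (>= 100), I/O confirmed"
fi
```

The threshold-plus-retry shape is what makes the test robust to slow connection bring-up: a count of 3 right after attach is expected, and only a counter that never climbs past 100 fails the run.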
']' 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2590513 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590513 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590513' 00:23:40.336 killing process with pid 2590513 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2590513 00:23:40.336 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2590513 00:23:40.336 Received shutdown signal, test time was about 0.933061 seconds 00:23:40.336 00:23:40.336 Latency(us) 00:23:40.336 [2024-12-09T09:34:12.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.336 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme1n1 : 0.87 219.56 13.72 0.00 0.00 287938.18 19903.53 253211.69 00:23:40.336 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme2n1 : 0.90 212.67 13.29 0.00 0.00 291238.24 20971.52 264085.81 
00:23:40.336 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme3n1 : 0.93 276.70 17.29 0.00 0.00 219386.50 21262.79 256318.58 00:23:40.336 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme4n1 : 0.93 275.79 17.24 0.00 0.00 215410.16 17282.09 259425.47 00:23:40.336 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme5n1 : 0.91 210.27 13.14 0.00 0.00 275485.84 36894.34 236123.78 00:23:40.336 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme6n1 : 0.91 210.56 13.16 0.00 0.00 269958.26 21554.06 253211.69 00:23:40.336 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme7n1 : 0.93 274.61 17.16 0.00 0.00 202209.75 10000.31 237677.23 00:23:40.336 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme8n1 : 0.89 215.18 13.45 0.00 0.00 251492.88 19320.98 254765.13 00:23:40.336 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme9n1 : 0.92 209.24 13.08 0.00 0.00 254245.29 22136.60 270299.59 00:23:40.336 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:40.336 Verification LBA range: start 0x0 length 0x400 00:23:40.336 Nvme10n1 : 0.92 208.32 13.02 0.00 0.00 249628.19 22427.88 284280.60 00:23:40.336 [2024-12-09T09:34:12.777Z] =================================================================================================================== 00:23:40.336 
[2024-12-09T09:34:12.777Z] Total : 2312.91 144.56 0.00 0.00 248120.80 10000.31 284280.60 00:23:40.903 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2590413 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.834 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.835 rmmod nvme_tcp 00:23:41.835 rmmod nvme_fabrics 00:23:41.835 rmmod nvme_keyring 00:23:41.835 10:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2590413 ']' 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2590413 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2590413 ']' 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2590413 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2590413 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2590413' 00:23:41.835 killing process with pid 2590413 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2590413 00:23:41.835 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 2590413 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:42.401 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:42.402 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:42.402 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:42.402 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.402 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.402 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.936 00:23:44.936 real 0m8.225s 00:23:44.936 user 0m25.807s 00:23:44.936 sys 0m1.608s 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:44.936 ************************************ 00:23:44.936 END TEST nvmf_shutdown_tc2 00:23:44.936 ************************************ 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:44.936 ************************************ 00:23:44.936 START TEST nvmf_shutdown_tc3 00:23:44.936 ************************************ 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.936 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # local -ga net_devs 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.937 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:44.937 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.937 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:44.937 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:44.937 Found net devices under 0000:09:00.0: cvl_0_0 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:44.937 Found net devices under 0000:09:00.1: cvl_0_1 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.937 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.937 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:23:44.938 00:23:44.938 --- 10.0.0.2 ping statistics --- 00:23:44.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.938 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:44.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:23:44.938 00:23:44.938 --- 10.0.0.1 ping statistics --- 00:23:44.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.938 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.938 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2591551 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2591551 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2591551 ']' 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.938 10:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 [2024-12-09 10:34:17.067534] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:44.938 [2024-12-09 10:34:17.067620] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.938 [2024-12-09 10:34:17.138763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:44.938 [2024-12-09 10:34:17.196539] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.938 [2024-12-09 10:34:17.196591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.938 [2024-12-09 10:34:17.196618] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.938 [2024-12-09 10:34:17.196630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.938 [2024-12-09 10:34:17.196639] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.938 [2024-12-09 10:34:17.198084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:44.938 [2024-12-09 10:34:17.198156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:44.938 [2024-12-09 10:34:17.198215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:44.938 [2024-12-09 10:34:17.198218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 [2024-12-09 10:34:17.353938] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.938 10:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:44.938 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:45.196 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.197 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.197 Malloc1 00:23:45.197 [2024-12-09 10:34:17.459023] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.197 Malloc2 00:23:45.197 Malloc3 00:23:45.197 Malloc4 00:23:45.197 Malloc5 00:23:45.453 Malloc6 00:23:45.453 Malloc7 00:23:45.453 Malloc8 00:23:45.453 Malloc9 
00:23:45.453 Malloc10 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2591727 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2591727 /var/tmp/bdevperf.sock 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2591727 ']' 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:23:45.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.712 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": 
${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 
00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:45.713 { 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme$subsystem", 00:23:45.713 "trtype": "$TEST_TRANSPORT", 00:23:45.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "$NVMF_PORT", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:45.713 "hdgst": ${hdgst:-false}, 00:23:45.713 "ddgst": ${ddgst:-false} 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.713 } 00:23:45.713 EOF 00:23:45.713 )") 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:45.713 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:45.713 "params": { 00:23:45.713 "name": "Nvme1", 00:23:45.713 "trtype": "tcp", 00:23:45.713 "traddr": "10.0.0.2", 00:23:45.713 "adrfam": "ipv4", 00:23:45.713 "trsvcid": "4420", 00:23:45.713 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.713 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.713 "hdgst": false, 00:23:45.713 "ddgst": false 00:23:45.713 }, 00:23:45.713 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme2", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme3", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme4", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 
00:23:45.714 "params": { 00:23:45.714 "name": "Nvme5", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme6", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme7", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme8", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme9", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:45.714 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 },{ 00:23:45.714 "params": { 00:23:45.714 "name": "Nvme10", 00:23:45.714 "trtype": "tcp", 00:23:45.714 "traddr": "10.0.0.2", 00:23:45.714 "adrfam": "ipv4", 00:23:45.714 "trsvcid": "4420", 00:23:45.714 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:45.714 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:45.714 "hdgst": false, 00:23:45.714 "ddgst": false 00:23:45.714 }, 00:23:45.714 "method": "bdev_nvme_attach_controller" 00:23:45.714 }' 00:23:45.714 [2024-12-09 10:34:17.979251] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:45.714 [2024-12-09 10:34:17.979337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591727 ] 00:23:45.714 [2024-12-09 10:34:18.049696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.714 [2024-12-09 10:34:18.110055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.607 Running I/O for 10 seconds... 
00:23:47.607 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:47.607 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:23:47.607 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:47.607 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.607 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:47.864 10:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:47.864 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:48.122 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:48.395 10:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2591551 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2591551 ']' 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2591551 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2591551 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2591551' 00:23:48.395 killing process with pid 2591551 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2591551 00:23:48.395 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2591551 00:23:48.395 [2024-12-09 10:34:20.726447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.395 [2024-12-09 10:34:20.726528] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.395 [2024-12-09 10:34:20.726545] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.395 [2024-12-09 10:34:20.726558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727192] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727327] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46e70 is same with the state(6) to be set 00:23:48.396 [2024-12-09 10:34:20.727329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 
10:34:20.727848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.396 [2024-12-09 10:34:20.727935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.396 [2024-12-09 10:34:20.727948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.727964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.727977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.727992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 
10:34:20.728534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728692] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.728984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.728997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.729012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 
[2024-12-09 10:34:20.729025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.729040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.729053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.729068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.729082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.397 [2024-12-09 10:34:20.729097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.397 [2024-12-09 10:34:20.729111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.398 [2024-12-09 10:34:20.729126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.398 [2024-12-09 10:34:20.729152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.398 [2024-12-09 10:34:20.729172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.398 [2024-12-09 10:34:20.729187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.398 [2024-12-09 10:34:20.729202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.398 [2024-12-09 10:34:20.729203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.398 [2024-12-09 10:34:20.729226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.398 [2024-12-09 10:34:20.729238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.398 [2024-12-09 10:34:20.729251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.729297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 
10:34:20.729308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6200 is same with the state(6) to be set 00:23:48.398 [2024-12-09 10:34:20.732252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b66d0 is same with the state(6) to be set 00:23:48.399 [2024-12-09 10:34:20.733199] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b66d0 is same with the state(6) to be set
00:23:48.399 [2024-12-09 10:34:20.733349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:48.399 [2024-12-09 10:34:20.733439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4310 (9): Bad file descriptor
00:23:48.399 [2024-12-09 10:34:20.735417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.399 [2024-12-09 10:34:20.735450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4310 with addr=10.0.0.2, port=4420
00:23:48.399 [2024-12-09 10:34:20.735473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4310 is same with the state(6) to be set
00:23:48.399 [2024-12-09 10:34:20.735506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b6bc0 is same with the state(6) to be set
00:23:48.399 [2024-12-09 10:34:20.735527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.399 [2024-12-09 10:34:20.735559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.399 [2024-12-09 10:34:20.735580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.399 [2024-12-09 10:34:20.735593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.399 [2024-12-09 10:34:20.735607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.399 [2024-12-09 10:34:20.735620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.399 [2024-12-09 10:34:20.735634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.399 [2024-12-09 10:34:20.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.399 [2024-12-09 10:34:20.735660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d3e80 is same with the state(6) to be set
00:23:48.400 [2024-12-09 10:34:20.735731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.735753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.735776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.735791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.735807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.735820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.735834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.735847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.735859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd31d60 is same with the state(6) to be set
00:23:48.400 [2024-12-09 10:34:20.735937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.735959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.735973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.735993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.736021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.736048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8130 is same with the state(6) to be set
00:23:48.400 [2024-12-09 10:34:20.736110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.736148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.736183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.736211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.400 [2024-12-09 10:34:20.736241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.400 [2024-12-09 10:34:20.736257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9ea0 is same with the state(6) to be set
00:23:48.401 [2024-12-09 10:34:20.736933] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:48.401 [2024-12-09 10:34:20.736972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4310 (9): Bad file descriptor
00:23:48.401 [2024-12-09 10:34:20.737053] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:48.401 [2024-12-09 10:34:20.737515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:48.401 [2024-12-09 10:34:20.737516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set
00:23:48.401 [2024-12-09 10:34:20.737539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:48.401 [2024-12-09 10:34:20.737557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:48.401 [2024-12-09 10:34:20.737574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:48.401 [2024-12-09 10:34:20.737835] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737923] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737935] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.737995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738018] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738117] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:48.401 [2024-12-09 10:34:20.738150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with 
the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.738344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7090 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.739066] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:48.401 [2024-12-09 10:34:20.739995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740068] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.401 [2024-12-09 10:34:20.740103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740115] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740126] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740260] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740272] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740385] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740419] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740542] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740568] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740607] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740711] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740745] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.740779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7410 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747435] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747488] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747522] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747581] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.402 [2024-12-09 10:34:20.747707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747719] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747803] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747815] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747849] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747860] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747940] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747951] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set 00:23:48.403 [2024-12-09 10:34:20.747997] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7790 is same with the state(6) to be set
00:23:48.403 [last message repeated 9 times, 10:34:20.748009 – 10:34:20.748103]
00:23:48.403 [2024-12-09 10:34:20.748272] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:48.403 [2024-12-09 10:34:20.749163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7c60 is same with the state(6) to be set
00:23:48.403 [last message repeated 50 times, 10:34:20.749192 – 10:34:20.749752]
00:23:48.404 [2024-12-09 10:34:20.750363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e469a0 is same with the state(6) to be set
00:23:48.404 [last message repeated 62 times, 10:34:20.750389 – 10:34:20.751121]
00:23:48.404 task offset: 28160 on job bdev=Nvme1n1 fails
00:23:48.404 1743.75 IOPS, 108.98 MiB/s [2024-12-09T09:34:20.845Z]
[2024-12-09 10:34:20.756486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d3e80 (9): Bad file descriptor
00:23:48.404 [2024-12-09 10:34:20.756565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.404 [2024-12-09 10:34:20.756589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.404 [2024-12-09 10:34:20.756606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.404 [2024-12-09 10:34:20.756620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.404 [2024-12-09 10:34:20.756634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.404 [2024-12-09 10:34:20.756647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.404 [2024-12-09 10:34:20.756661] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.404 [2024-12-09 10:34:20.756674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.756687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff5c0 is same with the state(6) to be set 00:23:48.405 [2024-12-09 10:34:20.756736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.756756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.756771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.756784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.756798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.756811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.756824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.756837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.756849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c110 is same with the 
state(6) to be set 00:23:48.405 [2024-12-09 10:34:20.756883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd31d60 (9): Bad file descriptor 00:23:48.405 [2024-12-09 10:34:20.756938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.756959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.756973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.756986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4b20 is same with the state(6) to be set 00:23:48.405 [2024-12-09 10:34:20.757102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757133] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd40de0 is same with the state(6) to be set 00:23:48.405 [2024-12-09 10:34:20.757296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 
[2024-12-09 10:34:20.757359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.405 [2024-12-09 10:34:20.757406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.757430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd419e0 is same with the state(6) to be set 00:23:48.405 [2024-12-09 10:34:20.757460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c8130 (9): Bad file descriptor 00:23:48.405 [2024-12-09 10:34:20.757494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c9ea0 (9): Bad file descriptor 00:23:48.405 [2024-12-09 10:34:20.758464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:48.405 [2024-12-09 10:34:20.758666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 
10:34:20.758909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.758968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.758989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759077] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.405 [2024-12-09 10:34:20.759249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.405 [2024-12-09 10:34:20.759264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 
[2024-12-09 10:34:20.759428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759589] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.759976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.759989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760093] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760267] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.406 [2024-12-09 10:34:20.760425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.406 [2024-12-09 10:34:20.760438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.760453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.760467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.760775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.407 [2024-12-09 10:34:20.760804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4310 with addr=10.0.0.2, port=4420 00:23:48.407 [2024-12-09 10:34:20.760820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d4310 is same with the state(6) to be set 00:23:48.407 [2024-12-09 10:34:20.760924] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:48.407 [2024-12-09 10:34:20.760994] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:48.407 [2024-12-09 10:34:20.762176] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:48.407 [2024-12-09 10:34:20.762213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:23:48.407 [2024-12-09 10:34:20.762248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd419e0 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.762271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4310 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.762389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:48.407 [2024-12-09 
10:34:20.762411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:48.407 [2024-12-09 10:34:20.762427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:48.407 [2024-12-09 10:34:20.762443] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:48.407 [2024-12-09 10:34:20.762870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.407 [2024-12-09 10:34:20.762898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd419e0 with addr=10.0.0.2, port=4420 00:23:48.407 [2024-12-09 10:34:20.762914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd419e0 is same with the state(6) to be set 00:23:48.407 [2024-12-09 10:34:20.762986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd419e0 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.763064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:23:48.407 [2024-12-09 10:34:20.763084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:23:48.407 [2024-12-09 10:34:20.763097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:23:48.407 [2024-12-09 10:34:20.763110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:23:48.407 [2024-12-09 10:34:20.766457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcff5c0 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.766506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83c110 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.766544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf4b20 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.766578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd40de0 (9): Bad file descriptor 00:23:48.407 [2024-12-09 10:34:20.766745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.766970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.766987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:48.407 [2024-12-09 10:34:20.767058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:48.407 [2024-12-09 10:34:20.767589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.407 [2024-12-09 10:34:20.767609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767756] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.408 [2024-12-09 10:34:20.767901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.408 [2024-12-09 10:34:20.767915] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.408 [2024-12-09 10:34:20.767930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.408 [2024-12-09 10:34:20.767944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command/completion NOTICE pairs repeat for cid:39-63 (lba:21376-24448, stepping by 128), each completed with ABORTED - SQ DELETION (00/08) ...]
00:23:48.409 [2024-12-09 10:34:20.768713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xad9290 is same with the state(6) to be set
[... identical READ command/completion NOTICE pairs repeat for cid:0-63 (lba:24576-32640, stepping by 128), each completed with ABORTED - SQ DELETION (00/08) ...]
00:23:48.410 [2024-12-09 10:34:20.771881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc8ab0 is same with the state(6) to be set
[... further command/completion NOTICE pairs repeat: READ cid:4-21 (lba:25088-27264) interleaved with WRITE cid:0-3 (lba:32768-33152), each completed with ABORTED - SQ DELETION (00/08); log truncated mid-entry ...]
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.773982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.773995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774104] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774276] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 
10:34:20.774621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.411 [2024-12-09 10:34:20.774664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.411 [2024-12-09 10:34:20.774680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774783] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.774983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.774998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.775014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.775027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.775041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd59b0 is same with the state(6) to be set 00:23:48.412 [2024-12-09 10:34:20.776316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:48.412 [2024-12-09 10:34:20.776404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776563] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.776976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.776990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.777005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.777018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.777033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.777047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.777062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.777076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.777091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.412 [2024-12-09 10:34:20.777104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.412 [2024-12-09 10:34:20.777120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777231] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 
10:34:20.777567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777739] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.777984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.777998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.778013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.778026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.778042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.778056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.413 [2024-12-09 10:34:20.778071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.778088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.778104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.778118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.778133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.778154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.413 [2024-12-09 10:34:20.778172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.413 [2024-12-09 10:34:20.778185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.778200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.778214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.778228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd450 is same with the state(6) to be set 00:23:48.414 [2024-12-09 10:34:20.779432] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:23:48.414 [2024-12-09 10:34:20.779464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:23:48.414 [2024-12-09 10:34:20.779486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:48.414 [2024-12-09 10:34:20.779506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:23:48.414 [2024-12-09 10:34:20.779932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.414 [2024-12-09 10:34:20.779962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d3e80 with addr=10.0.0.2, port=4420 00:23:48.414 [2024-12-09 10:34:20.779979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d3e80 is same with the state(6) to be set 00:23:48.414 [2024-12-09 10:34:20.780072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.414 [2024-12-09 10:34:20.780097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c9ea0 with addr=10.0.0.2, port=4420 00:23:48.414 [2024-12-09 10:34:20.780113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9ea0 is same with the state(6) to be set 00:23:48.414 [2024-12-09 10:34:20.780209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:48.414 [2024-12-09 10:34:20.780234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c8130 with addr=10.0.0.2, port=4420 00:23:48.414 [2024-12-09 10:34:20.780250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8130 is same with the state(6) to be set 00:23:48.414 [2024-12-09 10:34:20.780331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:23:48.414 [2024-12-09 10:34:20.780355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd31d60 with addr=10.0.0.2, port=4420 00:23:48.414 [2024-12-09 10:34:20.780370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd31d60 is same with the state(6) to be set 00:23:48.414 [2024-12-09 10:34:20.781226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.414 [2024-12-09 10:34:20.781726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781885] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.781972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.781987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.782004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.782021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.782035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.782050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.782064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.782080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.782093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.782109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.782123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.414 [2024-12-09 10:34:20.782145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.414 [2024-12-09 10:34:20.782161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 
10:34:20.782399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782561] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 
[2024-12-09 10:34:20.782901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.782975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.782988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.783004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.783017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.783032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.783046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.783062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.783075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.783090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.783103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.783119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.783163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd6b20 is same with the state(6) to be set 00:23:48.415 [2024-12-09 10:34:20.784409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.784432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.784452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.784467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.784482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.784496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.784512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.784525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.784541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.784555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.784571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.415 [2024-12-09 10:34:20.784585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.415 [2024-12-09 10:34:20.784601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.416 [2024-12-09 10:34:20.784660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784826] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.784973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.784988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 
10:34:20.785338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.416 [2024-12-09 10:34:20.785763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.416 [2024-12-09 10:34:20.785777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.785806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 
[2024-12-09 10:34:20.785835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.785864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.785898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.785927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.785956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.785971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.785985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.786312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.786326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd7d70 is same with the state(6) to be set 
00:23:48.417 [2024-12-09 10:34:20.787575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787751] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.787977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.787991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.788006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.788019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.788035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.788049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.417 [2024-12-09 10:34:20.788064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.417 [2024-12-09 10:34:20.788078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.418 [2024-12-09 10:34:20.788093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 
10:34:20.788768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788928] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.788970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.788990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.418 [2024-12-09 10:34:20.789254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.418 [2024-12-09 10:34:20.789269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 
[2024-12-09 10:34:20.789282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.789486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.789500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcd9030 is same with the state(6) to be set 00:23:48.419 [2024-12-09 10:34:20.790730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.790979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.790993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.419 [2024-12-09 10:34:20.791008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791177] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.419 [2024-12-09 10:34:20.791627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.419 [2024-12-09 10:34:20.791640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 
10:34:20.791669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791832] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.791980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.791993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 
[2024-12-09 10:34:20.792183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792348] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.420 [2024-12-09 10:34:20.792499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.420 [2024-12-09 10:34:20.792512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.420 [2024-12-09 10:34:20.792528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.420 [2024-12-09 10:34:20.792541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.420 [2024-12-09 10:34:20.792557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.420 [2024-12-09 10:34:20.792570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.420 [2024-12-09 10:34:20.792586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.420 [2024-12-09 10:34:20.792599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.420 [2024-12-09 10:34:20.792615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.420 [2024-12-09 10:34:20.792629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.420 [2024-12-09 10:34:20.792643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcdb600 is same with the state(6) to be set
00:23:48.420 [2024-12-09 10:34:20.795028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:48.420 [2024-12-09 10:34:20.795064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:48.420 [2024-12-09 10:34:20.795094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:48.420 [2024-12-09 10:34:20.795117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:48.420 [2024-12-09 10:34:20.795149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:48.420 [2024-12-09 10:34:20.795241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d3e80 (9): Bad file descriptor
00:23:48.420 [2024-12-09 10:34:20.795266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c9ea0 (9): Bad file descriptor
00:23:48.420 [2024-12-09 10:34:20.795285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c8130 (9): Bad file descriptor
00:23:48.420 [2024-12-09 10:34:20.795303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd31d60 (9): Bad file descriptor
00:23:48.420 [2024-12-09 10:34:20.795362] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress.
00:23:48.420 [2024-12-09 10:34:20.795386] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:48.420 [2024-12-09 10:34:20.795405] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:23:48.420 [2024-12-09 10:34:20.795426] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:23:48.420 [2024-12-09 10:34:20.795451] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:23:48.421
00:23:48.421 Latency(us)
00:23:48.421 [2024-12-09T09:34:20.862Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.421 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme1n1 ended in about 0.99 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme1n1 : 0.99 193.75 12.11 64.58 0.00 245121.80 5971.06 257872.02
00:23:48.421 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme2n1 ended in about 1.03 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme2n1 : 1.03 124.44 7.78 62.22 0.00 333441.45 22136.60 281173.71
00:23:48.421 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme3n1 ended in about 1.03 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme3n1 : 1.03 186.09 11.63 62.03 0.00 246226.87 19709.35 233016.89
00:23:48.421 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme4n1 ended in about 1.03 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme4n1 : 1.03 189.38 11.84 61.84 0.00 238696.49 14660.65 257872.02
00:23:48.421 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme5n1 ended in about 1.04 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme5n1 : 1.04 188.87 11.80 61.36 0.00 235301.77 12233.39 251658.24
00:23:48.421 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme6n1 ended in about 1.05 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme6n1 : 1.05 188.30 11.77 61.17 0.00 231640.70 24175.50 233016.89
00:23:48.421 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme7n1 ended in about 1.05 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme7n1 : 1.05 182.97 11.44 60.99 0.00 232420.50 19806.44 256318.58
00:23:48.421 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme8n1 ended in about 1.02 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme8n1 : 1.02 191.98 12.00 58.77 0.00 220760.46 2949.12 257872.02
00:23:48.421 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme9n1 ended in about 1.05 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme9n1 : 1.05 121.62 7.60 60.81 0.00 299329.86 21359.88 274959.93
00:23:48.421 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:48.421 Job: Nvme10n1 ended in about 1.04 seconds with error
00:23:48.421 Verification LBA range: start 0x0 length 0x400
00:23:48.421 Nvme10n1 : 1.04 123.30 7.71 61.65 0.00 288671.92 21068.61 296708.17
00:23:48.421 [2024-12-09T09:34:20.862Z] ===================================================================================================================
00:23:48.421 [2024-12-09T09:34:20.862Z] Total : 1690.71 105.67 615.43 0.00 253001.58 2949.12 296708.17
00:23:48.679 [2024-12-09 10:34:20.824205] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:23:48.679 [2024-12-09 10:34:20.824277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:23:48.679 [2024-12-09 10:34:20.824499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.679 [2024-12-09 10:34:20.824535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d4310 with addr=10.0.0.2, port=4420
00:23:48.679 [2024-12-09 10:34:20.824555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x8d4310 is same with the state(6) to be set
00:23:48.679 [2024-12-09 10:34:20.824657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.679 [2024-12-09 10:34:20.824684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd419e0 with addr=10.0.0.2, port=4420
00:23:48.679 [2024-12-09 10:34:20.824701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd419e0 is same with the state(6) to be set
00:23:48.679 [2024-12-09 10:34:20.824791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.679 [2024-12-09 10:34:20.824817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcff5c0 with addr=10.0.0.2, port=4420
00:23:48.679 [2024-12-09 10:34:20.824833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcff5c0 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.824946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.824971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83c110 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.824987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x83c110 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.825083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.825110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xcf4b20 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.825126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf4b20 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.825160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.825177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.825193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.825211] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.825229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.825241] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.825254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.825266] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.825280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.825293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.825305] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.825317] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.825331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.825343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.825355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.825367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.826674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.826719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd40de0 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.826736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd40de0 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.826761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d4310 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.826782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd419e0 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.826800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcff5c0 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.826817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x83c110 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.826835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf4b20 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.826924] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*:
[nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:48.680 [2024-12-09 10:34:20.826949] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:23:48.680 [2024-12-09 10:34:20.826969] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress.
00:23:48.680 [2024-12-09 10:34:20.826989] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:48.680 [2024-12-09 10:34:20.827007] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:48.680 [2024-12-09 10:34:20.827354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd40de0 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.827383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.827397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.827410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.827424] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.827438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.827449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.827462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.827473] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.827487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.827499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.827511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.827522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.827535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.827547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.827559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.827576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.827590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.827601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.827613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.827625] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.827708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:48.680 [2024-12-09 10:34:20.827732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:48.680 [2024-12-09 10:34:20.827749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:48.680 [2024-12-09 10:34:20.827765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:48.680 [2024-12-09 10:34:20.827809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.827824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.827838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.827850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.827962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.827988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd31d60 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.828004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd31d60 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.828094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.828119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c8130 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.828134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c8130 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.828222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.828246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8c9ea0 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.828262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9ea0 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.828347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:48.680 [2024-12-09 10:34:20.828372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d3e80 with addr=10.0.0.2, port=4420
00:23:48.680 [2024-12-09 10:34:20.828387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8d3e80 is same with the state(6) to be set
00:23:48.680 [2024-12-09 10:34:20.828430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd31d60 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.828454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c8130 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.828472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c9ea0 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.828490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8d3e80 (9): Bad file descriptor
00:23:48.680 [2024-12-09 10:34:20.828535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.828554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.828568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:23:48.680 [2024-12-09 10:34:20.828580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:23:48.680 [2024-12-09 10:34:20.828594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:23:48.680 [2024-12-09 10:34:20.828606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:23:48.680 [2024-12-09 10:34:20.828618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:23:48.681 [2024-12-09 10:34:20.828630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:23:48.681 [2024-12-09 10:34:20.828643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:23:48.681 [2024-12-09 10:34:20.828655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:23:48.681 [2024-12-09 10:34:20.828667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:23:48.681 [2024-12-09 10:34:20.828678] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:23:48.681 [2024-12-09 10:34:20.828691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:23:48.681 [2024-12-09 10:34:20.828703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:23:48.681 [2024-12-09 10:34:20.828716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:23:48.681 [2024-12-09 10:34:20.828727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:23:48.940 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2591727 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2591727 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2591727 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:49.878 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.136 rmmod nvme_tcp 00:23:50.136 rmmod nvme_fabrics 00:23:50.136 rmmod nvme_keyring 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:50.136 10:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2591551 ']' 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2591551 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2591551 ']' 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2591551 00:23:50.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2591551) - No such process 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2591551 is not found' 00:23:50.136 Process with pid 2591551 is not found 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:50.136 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:52.034 00:23:52.034 real 0m7.582s 00:23:52.034 user 0m18.931s 00:23:52.034 sys 0m1.458s 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:52.034 ************************************ 00:23:52.034 END TEST nvmf_shutdown_tc3 00:23:52.034 ************************************ 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:52.034 ************************************ 00:23:52.034 START TEST nvmf_shutdown_tc4 00:23:52.034 ************************************ 00:23:52.034 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.034 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.293 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.293 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:52.293 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.293 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:52.294 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.294 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:23:52.294 Found net devices under 0000:09:00.0: cvl_0_0 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:52.294 Found net devices under 0000:09:00.1: cvl_0_1 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.294 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.294 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:52.294 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:23:52.294 00:23:52.294 --- 10.0.0.2 ping statistics --- 00:23:52.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.294 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.294 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:52.294 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms 00:23:52.294 00:23:52.294 --- 10.0.0.1 ping statistics --- 00:23:52.294 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.294 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.294 10:34:24 
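The nvmf_tcp_init trace above (namespace creation, address assignment, firewall rule, bidirectional ping check) reduces to the following command sequence. This is a dry-run sketch that only prints the commands rather than executing them, since the real harness needs root; the interface names cvl_0_0/cvl_0_1, the namespace name, and the 10.0.0.0/24 addresses are taken from this run.

```shell
#!/bin/sh
# Dry-run sketch of the TCP loopback topology built by nvmf_tcp_init.
# "run" echoes instead of executing so this can be inspected without root.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0        # moved into the namespace, gets the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                      # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator
```

Both pings succeeding (as they do in the log, 0% packet loss) is what lets common.sh@450 return 0 and the test proceed to starting the target inside the namespace.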
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2592521 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2592521 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2592521 ']' 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.294 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.294 [2024-12-09 10:34:24.712353] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:23:52.294 [2024-12-09 10:34:24.712441] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.553 [2024-12-09 10:34:24.788244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.553 [2024-12-09 10:34:24.844201] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.553 [2024-12-09 10:34:24.844258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.553 [2024-12-09 10:34:24.844285] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.553 [2024-12-09 10:34:24.844296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.553 [2024-12-09 10:34:24.844306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:52.553 [2024-12-09 10:34:24.845765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.553 [2024-12-09 10:34:24.845891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.553 [2024-12-09 10:34:24.846005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:52.553 [2024-12-09 10:34:24.846009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.553 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.553 [2024-12-09 10:34:24.987256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.811 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.811 10:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:52.811 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:52.811 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.811 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.811 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.811 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.812 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:52.812 Malloc1 00:23:52.812 [2024-12-09 10:34:25.086366] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.812 Malloc2 00:23:52.812 Malloc3 00:23:52.812 Malloc4 00:23:53.069 Malloc5 00:23:53.069 Malloc6 00:23:53.069 Malloc7 00:23:53.069 Malloc8 00:23:53.069 Malloc9 
00:23:53.326 Malloc10 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2592695 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:53.326 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:53.326 [2024-12-09 10:34:25.629544] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
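What follows in the log is the point of shutdown_tc4: spdk_nvme_perf still has most of its 20-second randwrite workload queued when the target is killed, so every in-flight write completes with a transport error. The control flow reduces to roughly this dry-run sketch; the pids (2592521 for the target, 2592695 for perf) and the perf arguments are the ones logged above, and the real script backgrounds perf and uses the SPDK killprocess helper.

```shell
#!/bin/sh
# Dry-run sketch of the shutdown_tc4 sequence: SIGKILL the target while
# perf still has ~15 s of queued I/O, then clean up perf itself.
run() { echo "+ $*"; }          # print instead of execute

nvmfpid=2592521                 # target pid from this run
perfpid=2592695                 # perf pid from this run

run spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r "trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420" -P 4
run sleep 5                     # let perf ramp up with 128-deep queues
run kill -9 "$nvmfpid"          # hard-kill the target mid-I/O
# Every queued write now completes with a transport error
# (sct=0, sc=8 / "CQ transport error -6"), which is the expected outcome.
run kill -9 "$perfpid"          # cleanup; '|| true' in the trap above
```

The "Write completed with error" storm below is therefore success, not failure, for this test case: it demonstrates the initiator failing over cleanly when the target disappears.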
00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2592521 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2592521 ']' 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2592521 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2592521 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2592521' 00:23:58.596 killing process with pid 2592521 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2592521 00:23:58.596 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2592521 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 
00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 [2024-12-09 10:34:30.624963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aa90 is same with the state(6) to be set 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 [2024-12-09 10:34:30.625032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aa90 is same with the state(6) to be set 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 [2024-12-09 10:34:30.625048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aa90 is same with the state(6) to be set 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 [2024-12-09 10:34:30.625066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aa90 is same with the state(6) to be set 00:23:58.596 starting I/O failed: -6 00:23:58.596 [2024-12-09 10:34:30.625079] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aa90 is same with the state(6) to be set 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 [2024-12-09 10:34:30.625091] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167aa90 is same with the state(6) to be set 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 
00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 starting I/O failed: -6 00:23:58.596 Write completed with error (sct=0, sc=8) 00:23:58.596 [2024-12-09 10:34:30.625663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6310 is same with the state(6) to be set 00:23:58.596 [2024-12-09 10:34:30.625695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6310 is same with the state(6) to be set 00:23:58.596 [2024-12-09 10:34:30.625711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6310 is same with the state(6) to be set 00:23:58.596 [2024-12-09 10:34:30.625725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6310 is same with the state(6) to be set 00:23:58.596 [2024-12-09 10:34:30.625737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6310 is same with the state(6) to be set 00:23:58.596 [2024-12-09 10:34:30.625696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No 
such device or address) on qpair id 2
00:23:58.596 [2024-12-09 10:34:30.625749] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6310 is same with the state(6) to be set
00:23:58.597 Write completed with error (sct=0, sc=8)
00:23:58.597 starting I/O failed: -6
00:23:58.597 [2024-12-09 10:34:30.625909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.625943] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.625958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.625971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.625990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.626010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.626025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.626037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.626050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d67e0 is same with the state(6) to be set
00:23:58.597 [2024-12-09 10:34:30.626924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.597 [2024-12-09 10:34:30.628108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.597 [2024-12-09 10:34:30.629906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.597 NVMe io qpair process completion error
00:23:58.597 [2024-12-09 10:34:30.631204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.598 [2024-12-09 10:34:30.632285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.598 [2024-12-09 10:34:30.633443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.598 [2024-12-09 10:34:30.635416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.598 NVMe io qpair process completion error
00:23:58.598 [2024-12-09 10:34:30.640323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.598 [2024-12-09 10:34:30.641321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.599 [2024-12-09 10:34:30.642526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.599 Write completed
with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write 
completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 
Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 [2024-12-09 10:34:30.644235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:58.599 NVMe io qpair process completion error 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 
00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 [2024-12-09 10:34:30.645453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 
00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 
00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 [2024-12-09 10:34:30.646558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 
00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 
00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 [2024-12-09 10:34:30.647759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: -6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.599 starting I/O failed: 
-6 00:23:58.599 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O 
failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting 
I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 [2024-12-09 10:34:30.649531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.600 NVMe io qpair process completion error 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 
00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 [2024-12-09 10:34:30.650818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error 
(sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6 00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 Write completed with error (sct=0, sc=8) 
00:23:58.600 Write completed with error (sct=0, sc=8) 00:23:58.600 starting I/O failed: -6
00:23:58.600 [previous two messages repeated for each outstanding I/O between the entries below; verbatim duplicates elided]
00:23:58.600 [2024-12-09 10:34:30.651872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.600 [2024-12-09 10:34:30.653060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.601 [2024-12-09 10:34:30.655295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.601 NVMe io qpair process completion error
00:23:58.601 [2024-12-09 10:34:30.656560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.601 [2024-12-09 10:34:30.657634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.601 [2024-12-09 10:34:30.658793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.601 [2024-12-09 10:34:30.662026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:58.601 NVMe io qpair process completion error
00:23:58.601 [2024-12-09 10:34:30.663272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:58.601 [2024-12-09 10:34:30.664264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:58.602 [2024-12-09 10:34:30.665499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:58.602 Write
completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 [2024-12-09 10:34:30.668878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.602 NVMe io qpair process completion error 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write 
completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 [2024-12-09 10:34:30.670204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error 
(sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 
00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 [2024-12-09 10:34:30.671251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 
00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with 
error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 [2024-12-09 10:34:30.672456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O 
failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting 
I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 
starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 [2024-12-09 10:34:30.674384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:58.602 NVMe io qpair process completion error 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 Write completed with error (sct=0, sc=8) 00:23:58.602 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write 
completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 
00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 
00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with 
error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 
starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 00:23:58.603 Write completed with error (sct=0, sc=8) 00:23:58.603 starting I/O failed: -6 
00:23:58.603 Write completed with error (sct=0, sc=8)
00:23:58.603 starting I/O failed: -6
00:23:58.604 Initializing NVMe Controllers
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:58.604 Controller IO queue size 128, less than required.
00:23:58.604 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:58.604 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:58.604 Initialization complete. Launching workers.
00:23:58.604 ========================================================
00:23:58.604 Latency(us)
00:23:58.604 Device Information : IOPS MiB/s Average min max
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1875.66 80.59 68264.19 841.51 118987.67
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1812.56 77.88 70664.93 864.26 119345.91
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1833.89 78.80 69868.73 953.91 122567.54
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1871.75 80.43 68488.25 1129.15 115256.90
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1857.82 79.83 69048.69 1012.44 114901.05
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1826.49 78.48 70282.07 895.90 133486.93
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1788.63 76.86 71790.45 1071.38 136775.14
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1820.83 78.24 69713.12 1096.42 114076.15
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1821.27 78.26 69722.01 891.82 115927.30
00:23:58.604 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1868.70 80.30 68616.94 583.26 116747.94
00:23:58.604 ========================================================
00:23:58.604 Total : 18377.60 789.66 69630.87 583.26 136775.14
00:23:58.604
00:23:58.604 [2024-12-09 10:34:30.689209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe532c0 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe529e0 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe52d10 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe535f0 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe526b0 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe53c50 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe54900 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe54720 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe54ae0 is same with the state(6) to be set
00:23:58.604 [2024-12-09 10:34:30.689796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe53920 is same with the state(6) to be set
00:23:58.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:58.864 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:59.880 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2592695
00:23:59.880 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:23:59.880 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2592695
00:23:59.880 10:34:32
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:23:59.880 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.880 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:23:59.880 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2592695 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:59.881 10:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:59.881 rmmod nvme_tcp 00:23:59.881 rmmod nvme_fabrics 00:23:59.881 rmmod nvme_keyring 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2592521 ']' 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2592521 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2592521 ']' 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2592521 00:23:59.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2592521) - No such process 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2592521 is not 
found' 00:23:59.881 Process with pid 2592521 is not found 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.881 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:02.417 00:24:02.417 real 0m9.809s 00:24:02.417 user 0m23.769s 00:24:02.417 sys 0m5.658s 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:02.417 ************************************ 00:24:02.417 END TEST nvmf_shutdown_tc4 00:24:02.417 ************************************ 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:02.417 00:24:02.417 real 0m38.027s 00:24:02.417 user 1m43.879s 00:24:02.417 sys 0m12.188s 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:02.417 ************************************ 00:24:02.417 END TEST nvmf_shutdown 00:24:02.417 ************************************ 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:02.417 ************************************ 00:24:02.417 START TEST nvmf_nsid 00:24:02.417 ************************************ 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:02.417 * Looking for test storage... 
00:24:02.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:02.417 
10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.417 --rc genhtml_branch_coverage=1 00:24:02.417 --rc genhtml_function_coverage=1 00:24:02.417 --rc genhtml_legend=1 00:24:02.417 --rc geninfo_all_blocks=1 00:24:02.417 --rc 
geninfo_unexecuted_blocks=1 00:24:02.417 00:24:02.417 ' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.417 --rc genhtml_branch_coverage=1 00:24:02.417 --rc genhtml_function_coverage=1 00:24:02.417 --rc genhtml_legend=1 00:24:02.417 --rc geninfo_all_blocks=1 00:24:02.417 --rc geninfo_unexecuted_blocks=1 00:24:02.417 00:24:02.417 ' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.417 --rc genhtml_branch_coverage=1 00:24:02.417 --rc genhtml_function_coverage=1 00:24:02.417 --rc genhtml_legend=1 00:24:02.417 --rc geninfo_all_blocks=1 00:24:02.417 --rc geninfo_unexecuted_blocks=1 00:24:02.417 00:24:02.417 ' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:02.417 --rc genhtml_branch_coverage=1 00:24:02.417 --rc genhtml_function_coverage=1 00:24:02.417 --rc genhtml_legend=1 00:24:02.417 --rc geninfo_all_blocks=1 00:24:02.417 --rc geninfo_unexecuted_blocks=1 00:24:02.417 00:24:02.417 ' 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:02.417 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:02.418 10:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:02.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:02.418 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:04.318 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:04.318 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:04.318 Found net devices under 0000:09:00.0: cvl_0_0 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:04.318 Found net devices under 0000:09:00.1: cvl_0_1 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:04.318 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:04.318 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:04.577 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:24:04.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:24:04.577 00:24:04.577 --- 10.0.0.2 ping statistics --- 00:24:04.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.577 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:24:04.577 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:04.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:04.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:04.578 00:24:04.578 --- 10.0.0.1 ping statistics --- 00:24:04.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.578 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:04.578 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2595443 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2595443 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2595443 ']' 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.578 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.578 [2024-12-09 10:34:36.846892] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:24:04.578 [2024-12-09 10:34:36.846978] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.578 [2024-12-09 10:34:36.921580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.578 [2024-12-09 10:34:36.979332] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.578 [2024-12-09 10:34:36.979395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.578 [2024-12-09 10:34:36.979409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.578 [2024-12-09 10:34:36.979420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.578 [2024-12-09 10:34:36.979430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.578 [2024-12-09 10:34:36.980041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2595469 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.836 
10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9db7bbe5-471b-41e1-9145-1852d884c969 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=56d762eb-add0-43f6-8dfe-a234cdaa6fcf 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=14a12d1d-fec9-47b0-aa32-01042810e240 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.836 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.836 null0 00:24:04.836 null1 00:24:04.836 null2 00:24:04.836 [2024-12-09 10:34:37.172383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.836 [2024-12-09 10:34:37.189758] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:24:04.837 [2024-12-09 10:34:37.189835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2595469 ] 00:24:04.837 [2024-12-09 10:34:37.196613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2595469 /var/tmp/tgt2.sock 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2595469 ']' 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:04.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.837 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:04.837 [2024-12-09 10:34:37.262302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.094 [2024-12-09 10:34:37.321379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.351 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:05.351 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:05.351 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:05.608 [2024-12-09 10:34:37.980249] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.608 [2024-12-09 10:34:37.996418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:05.608 nvme0n1 nvme0n2 00:24:05.608 nvme1n1 00:24:05.608 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:05.608 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:05.608 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:06.173 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:07.543 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9db7bbe5-471b-41e1-9145-1852d884c969 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:07.544 10:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9db7bbe5471b41e191451852d884c969 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9DB7BBE5471B41E191451852D884C969 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9DB7BBE5471B41E191451852D884C969 == \9\D\B\7\B\B\E\5\4\7\1\B\4\1\E\1\9\1\4\5\1\8\5\2\D\8\8\4\C\9\6\9 ]] 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 56d762eb-add0-43f6-8dfe-a234cdaa6fcf 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:07.544 
10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=56d762ebadd043f68dfea234cdaa6fcf 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 56D762EBADD043F68DFEA234CDAA6FCF 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 56D762EBADD043F68DFEA234CDAA6FCF == \5\6\D\7\6\2\E\B\A\D\D\0\4\3\F\6\8\D\F\E\A\2\3\4\C\D\A\A\6\F\C\F ]] 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 14a12d1d-fec9-47b0-aa32-01042810e240 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=14a12d1dfec947b0aa3201042810e240 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 14A12D1DFEC947B0AA3201042810E240 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 14A12D1DFEC947B0AA3201042810E240 == \1\4\A\1\2\D\1\D\F\E\C\9\4\7\B\0\A\A\3\2\0\1\0\4\2\8\1\0\E\2\4\0 ]] 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2595469 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2595469 ']' 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2595469 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.544 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595469 00:24:07.801 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:07.801 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:07.801 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595469' 00:24:07.801 killing process with pid 2595469 00:24:07.801 10:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2595469 00:24:07.801 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2595469 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:08.059 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:08.059 rmmod nvme_tcp 00:24:08.318 rmmod nvme_fabrics 00:24:08.318 rmmod nvme_keyring 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2595443 ']' 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2595443 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2595443 ']' 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2595443 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:08.318 10:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2595443 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2595443' 00:24:08.318 killing process with pid 2595443 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2595443 00:24:08.318 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2595443 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.577 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:08.577 10:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.486 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:10.486 00:24:10.486 real 0m8.525s 00:24:10.486 user 0m8.363s 00:24:10.486 sys 0m2.770s 00:24:10.486 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.486 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:10.486 ************************************ 00:24:10.486 END TEST nvmf_nsid 00:24:10.486 ************************************ 00:24:10.486 10:34:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:10.486 00:24:10.486 real 11m44.797s 00:24:10.486 user 27m43.337s 00:24:10.486 sys 2m51.869s 00:24:10.486 10:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.486 10:34:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:10.486 ************************************ 00:24:10.486 END TEST nvmf_target_extra 00:24:10.486 ************************************ 00:24:10.745 10:34:42 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:10.745 10:34:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.745 10:34:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.745 10:34:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:10.745 ************************************ 00:24:10.745 START TEST nvmf_host 00:24:10.745 ************************************ 00:24:10.745 10:34:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:10.745 * Looking for test storage... 
00:24:10.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.745 --rc genhtml_branch_coverage=1 00:24:10.745 --rc genhtml_function_coverage=1 00:24:10.745 --rc genhtml_legend=1 00:24:10.745 --rc geninfo_all_blocks=1 00:24:10.745 --rc geninfo_unexecuted_blocks=1 00:24:10.745 00:24:10.745 ' 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.745 --rc genhtml_branch_coverage=1 00:24:10.745 --rc genhtml_function_coverage=1 00:24:10.745 --rc genhtml_legend=1 00:24:10.745 --rc 
geninfo_all_blocks=1 00:24:10.745 --rc geninfo_unexecuted_blocks=1 00:24:10.745 00:24:10.745 ' 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.745 --rc genhtml_branch_coverage=1 00:24:10.745 --rc genhtml_function_coverage=1 00:24:10.745 --rc genhtml_legend=1 00:24:10.745 --rc geninfo_all_blocks=1 00:24:10.745 --rc geninfo_unexecuted_blocks=1 00:24:10.745 00:24:10.745 ' 00:24:10.745 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.745 --rc genhtml_branch_coverage=1 00:24:10.745 --rc genhtml_function_coverage=1 00:24:10.745 --rc genhtml_legend=1 00:24:10.745 --rc geninfo_all_blocks=1 00:24:10.746 --rc geninfo_unexecuted_blocks=1 00:24:10.746 00:24:10.746 ' 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.746 ************************************ 00:24:10.746 START TEST nvmf_multicontroller 00:24:10.746 ************************************ 00:24:10.746 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:10.746 * Looking for test storage... 
00:24:10.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.005 --rc genhtml_branch_coverage=1 00:24:11.005 --rc genhtml_function_coverage=1 
00:24:11.005 --rc genhtml_legend=1 00:24:11.005 --rc geninfo_all_blocks=1 00:24:11.005 --rc geninfo_unexecuted_blocks=1 00:24:11.005 00:24:11.005 ' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.005 --rc genhtml_branch_coverage=1 00:24:11.005 --rc genhtml_function_coverage=1 00:24:11.005 --rc genhtml_legend=1 00:24:11.005 --rc geninfo_all_blocks=1 00:24:11.005 --rc geninfo_unexecuted_blocks=1 00:24:11.005 00:24:11.005 ' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.005 --rc genhtml_branch_coverage=1 00:24:11.005 --rc genhtml_function_coverage=1 00:24:11.005 --rc genhtml_legend=1 00:24:11.005 --rc geninfo_all_blocks=1 00:24:11.005 --rc geninfo_unexecuted_blocks=1 00:24:11.005 00:24:11.005 ' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:11.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:11.005 --rc genhtml_branch_coverage=1 00:24:11.005 --rc genhtml_function_coverage=1 00:24:11.005 --rc genhtml_legend=1 00:24:11.005 --rc geninfo_all_blocks=1 00:24:11.005 --rc geninfo_unexecuted_blocks=1 00:24:11.005 00:24:11.005 ' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.005 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.006 10:34:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:11.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:11.006 10:34:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.536 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:13.537 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:13.537 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.537 10:34:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:13.537 Found net devices under 0000:09:00.0: cvl_0_0 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:13.537 Found net devices under 0000:09:00.1: cvl_0_1 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:24:13.537 00:24:13.537 --- 10.0.0.2 ping statistics --- 00:24:13.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.537 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:13.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:24:13.537 00:24:13.537 --- 10.0.0.1 ping statistics --- 00:24:13.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.537 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2598025 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2598025 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2598025 ']' 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.537 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.537 [2024-12-09 10:34:45.637736] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:24:13.538 [2024-12-09 10:34:45.637832] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.538 [2024-12-09 10:34:45.711933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:13.538 [2024-12-09 10:34:45.769345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.538 [2024-12-09 10:34:45.769398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:13.538 [2024-12-09 10:34:45.769422] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.538 [2024-12-09 10:34:45.769433] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.538 [2024-12-09 10:34:45.769442] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.538 [2024-12-09 10:34:45.770912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.538 [2024-12-09 10:34:45.770983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.538 [2024-12-09 10:34:45.770987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.538 [2024-12-09 10:34:45.921707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.538 Malloc0 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.538 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 [2024-12-09 
10:34:45.980820] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 [2024-12-09 10:34:45.988637] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 Malloc1 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2598052 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2598052 /var/tmp/bdevperf.sock 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2598052 ']' 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.796 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.053 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.053 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:14.053 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:14.053 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.053 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.311 NVMe0n1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.311 1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:14.311 10:34:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.311 request: 00:24:14.311 { 00:24:14.311 "name": "NVMe0", 00:24:14.311 "trtype": "tcp", 00:24:14.311 "traddr": "10.0.0.2", 00:24:14.311 "adrfam": "ipv4", 00:24:14.311 "trsvcid": "4420", 00:24:14.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.311 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:14.311 "hostaddr": "10.0.0.1", 00:24:14.311 "prchk_reftag": false, 00:24:14.311 "prchk_guard": false, 00:24:14.311 "hdgst": false, 00:24:14.311 "ddgst": false, 00:24:14.311 "allow_unrecognized_csi": false, 00:24:14.311 "method": "bdev_nvme_attach_controller", 00:24:14.311 "req_id": 1 00:24:14.311 } 00:24:14.311 Got JSON-RPC error response 00:24:14.311 response: 00:24:14.311 { 00:24:14.311 "code": -114, 00:24:14.311 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:14.311 } 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:14.311 10:34:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.311 request: 00:24:14.311 { 00:24:14.311 "name": "NVMe0", 00:24:14.311 "trtype": "tcp", 00:24:14.311 "traddr": "10.0.0.2", 00:24:14.311 "adrfam": "ipv4", 00:24:14.311 "trsvcid": "4420", 00:24:14.311 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:14.311 "hostaddr": "10.0.0.1", 00:24:14.311 "prchk_reftag": false, 00:24:14.311 "prchk_guard": false, 00:24:14.311 "hdgst": false, 00:24:14.311 "ddgst": false, 00:24:14.311 "allow_unrecognized_csi": false, 00:24:14.311 "method": "bdev_nvme_attach_controller", 00:24:14.311 "req_id": 1 00:24:14.311 } 00:24:14.311 Got JSON-RPC error response 00:24:14.311 response: 00:24:14.311 { 00:24:14.311 "code": -114, 00:24:14.311 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:14.311 } 00:24:14.311 10:34:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.311 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.311 request: 00:24:14.311 { 00:24:14.311 "name": "NVMe0", 00:24:14.311 "trtype": "tcp", 00:24:14.311 "traddr": "10.0.0.2", 00:24:14.311 "adrfam": "ipv4", 00:24:14.311 "trsvcid": "4420", 00:24:14.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.312 "hostaddr": "10.0.0.1", 00:24:14.312 "prchk_reftag": false, 00:24:14.312 "prchk_guard": false, 00:24:14.312 "hdgst": false, 00:24:14.312 "ddgst": false, 00:24:14.312 "multipath": "disable", 00:24:14.312 "allow_unrecognized_csi": false, 00:24:14.312 "method": "bdev_nvme_attach_controller", 00:24:14.312 "req_id": 1 00:24:14.312 } 00:24:14.312 Got JSON-RPC error response 00:24:14.312 response: 00:24:14.312 { 00:24:14.312 "code": -114, 00:24:14.312 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:14.312 } 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.312 request: 00:24:14.312 { 00:24:14.312 "name": "NVMe0", 00:24:14.312 "trtype": "tcp", 00:24:14.312 "traddr": "10.0.0.2", 00:24:14.312 "adrfam": "ipv4", 00:24:14.312 "trsvcid": "4420", 00:24:14.312 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.312 "hostaddr": "10.0.0.1", 00:24:14.312 "prchk_reftag": false, 00:24:14.312 "prchk_guard": false, 00:24:14.312 "hdgst": false, 00:24:14.312 "ddgst": false, 00:24:14.312 "multipath": "failover", 00:24:14.312 "allow_unrecognized_csi": false, 00:24:14.312 "method": "bdev_nvme_attach_controller", 00:24:14.312 "req_id": 1 00:24:14.312 } 00:24:14.312 Got JSON-RPC error response 00:24:14.312 response: 00:24:14.312 { 00:24:14.312 "code": -114, 00:24:14.312 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:14.312 } 00:24:14.312 10:34:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.312 NVMe0n1 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.312 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.569 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:14.569 10:34:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.948 { 00:24:15.948 "results": [ 00:24:15.948 { 00:24:15.948 "job": "NVMe0n1", 00:24:15.948 "core_mask": "0x1", 00:24:15.948 "workload": "write", 00:24:15.948 "status": "finished", 00:24:15.948 "queue_depth": 128, 00:24:15.948 "io_size": 4096, 00:24:15.948 "runtime": 1.009732, 00:24:15.948 "iops": 18491.04514861369, 00:24:15.948 "mibps": 72.23064511177223, 00:24:15.948 "io_failed": 0, 00:24:15.948 "io_timeout": 0, 00:24:15.948 "avg_latency_us": 6911.144284997331, 00:24:15.948 "min_latency_us": 2463.6681481481482, 00:24:15.948 "max_latency_us": 12427.567407407407 00:24:15.948 } 00:24:15.948 ], 00:24:15.948 "core_count": 1 00:24:15.948 } 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2598052 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2598052 ']' 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2598052 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.948 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2598052 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2598052' 00:24:15.949 killing process with pid 2598052 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2598052 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2598052 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:15.949 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:15.949 [2024-12-09 10:34:46.096351] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:24:15.949 [2024-12-09 10:34:46.096450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598052 ] 00:24:15.949 [2024-12-09 10:34:46.163445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.949 [2024-12-09 10:34:46.222594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.949 [2024-12-09 10:34:46.861663] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 565c51d9-172b-469d-bf1e-052dd096fefd already exists 00:24:15.949 [2024-12-09 10:34:46.861699] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:565c51d9-172b-469d-bf1e-052dd096fefd alias for bdev NVMe1n1 00:24:15.949 [2024-12-09 10:34:46.861724] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:15.949 Running I/O for 1 seconds... 00:24:15.949 18416.00 IOPS, 71.94 MiB/s 00:24:15.949 Latency(us) 00:24:15.949 [2024-12-09T09:34:48.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.949 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:15.949 NVMe0n1 : 1.01 18491.05 72.23 0.00 0.00 6911.14 2463.67 12427.57 00:24:15.949 [2024-12-09T09:34:48.390Z] =================================================================================================================== 00:24:15.949 [2024-12-09T09:34:48.390Z] Total : 18491.05 72.23 0.00 0.00 6911.14 2463.67 12427.57 00:24:15.949 Received shutdown signal, test time was about 1.000000 seconds 00:24:15.949 00:24:15.949 Latency(us) 00:24:15.949 [2024-12-09T09:34:48.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.949 [2024-12-09T09:34:48.390Z] =================================================================================================================== 00:24:15.949 [2024-12-09T09:34:48.390Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:15.949 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.949 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.949 rmmod nvme_tcp 00:24:15.949 rmmod nvme_fabrics 00:24:16.207 rmmod nvme_keyring 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2598025 ']' 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2598025 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2598025 ']' 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2598025 
00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2598025 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2598025' 00:24:16.207 killing process with pid 2598025 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2598025 00:24:16.207 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2598025 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.465 10:34:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.368 10:34:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:18.368 00:24:18.368 real 0m7.658s 00:24:18.368 user 0m11.960s 00:24:18.368 sys 0m2.425s 00:24:18.368 10:34:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:18.368 10:34:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:18.368 ************************************ 00:24:18.368 END TEST nvmf_multicontroller 00:24:18.368 ************************************ 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.653 ************************************ 00:24:18.653 START TEST nvmf_aer 00:24:18.653 ************************************ 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:18.653 * Looking for test storage... 
00:24:18.653 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:18.653 10:34:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.653 --rc genhtml_branch_coverage=1 00:24:18.653 --rc genhtml_function_coverage=1 00:24:18.653 --rc genhtml_legend=1 00:24:18.653 --rc geninfo_all_blocks=1 00:24:18.653 --rc geninfo_unexecuted_blocks=1 00:24:18.653 00:24:18.653 ' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.653 --rc 
genhtml_branch_coverage=1 00:24:18.653 --rc genhtml_function_coverage=1 00:24:18.653 --rc genhtml_legend=1 00:24:18.653 --rc geninfo_all_blocks=1 00:24:18.653 --rc geninfo_unexecuted_blocks=1 00:24:18.653 00:24:18.653 ' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.653 --rc genhtml_branch_coverage=1 00:24:18.653 --rc genhtml_function_coverage=1 00:24:18.653 --rc genhtml_legend=1 00:24:18.653 --rc geninfo_all_blocks=1 00:24:18.653 --rc geninfo_unexecuted_blocks=1 00:24:18.653 00:24:18.653 ' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:18.653 --rc genhtml_branch_coverage=1 00:24:18.653 --rc genhtml_function_coverage=1 00:24:18.653 --rc genhtml_legend=1 00:24:18.653 --rc geninfo_all_blocks=1 00:24:18.653 --rc geninfo_unexecuted_blocks=1 00:24:18.653 00:24:18.653 ' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.653 10:34:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:18.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:18.653 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.654 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:18.654 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.654 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:18.654 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:18.654 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:18.654 10:34:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:21.185 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:21.185 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.185 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.186 10:34:53 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:21.186 Found net devices under 0000:09:00.0: cvl_0_0 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:21.186 Found net devices under 0000:09:00.1: cvl_0_1 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:21.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:21.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:24:21.186 00:24:21.186 --- 10.0.0.2 ping statistics --- 00:24:21.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.186 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:21.186 00:24:21.186 --- 10.0.0.1 ping statistics --- 00:24:21.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.186 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2600312 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2600312 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2600312 ']' 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.186 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.186 [2024-12-09 10:34:53.388319] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:24:21.186 [2024-12-09 10:34:53.388398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:21.186 [2024-12-09 10:34:53.459829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:21.186 [2024-12-09 10:34:53.514751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:21.186 [2024-12-09 10:34:53.514809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:21.186 [2024-12-09 10:34:53.514837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:21.186 [2024-12-09 10:34:53.514849] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:21.186 [2024-12-09 10:34:53.514858] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:21.186 [2024-12-09 10:34:53.516469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.186 [2024-12-09 10:34:53.516547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:21.186 [2024-12-09 10:34:53.516612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:21.186 [2024-12-09 10:34:53.516615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 [2024-12-09 10:34:53.666761] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 Malloc0 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 [2024-12-09 10:34:53.736565] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.445 [ 00:24:21.445 { 00:24:21.445 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:21.445 "subtype": "Discovery", 00:24:21.445 "listen_addresses": [], 00:24:21.445 "allow_any_host": true, 00:24:21.445 "hosts": [] 00:24:21.445 }, 00:24:21.445 { 00:24:21.445 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.445 "subtype": "NVMe", 00:24:21.445 "listen_addresses": [ 00:24:21.445 { 00:24:21.445 "trtype": "TCP", 00:24:21.445 "adrfam": "IPv4", 00:24:21.445 "traddr": "10.0.0.2", 00:24:21.445 "trsvcid": "4420" 00:24:21.445 } 00:24:21.445 ], 00:24:21.445 "allow_any_host": true, 00:24:21.445 "hosts": [], 00:24:21.445 "serial_number": "SPDK00000000000001", 00:24:21.445 "model_number": "SPDK bdev Controller", 00:24:21.445 "max_namespaces": 2, 00:24:21.445 "min_cntlid": 1, 00:24:21.445 "max_cntlid": 65519, 00:24:21.445 "namespaces": [ 00:24:21.445 { 00:24:21.445 "nsid": 1, 00:24:21.445 "bdev_name": "Malloc0", 00:24:21.445 "name": "Malloc0", 00:24:21.445 "nguid": "5A79430CC4F44BD9B6C3069C459D2DCC", 00:24:21.445 "uuid": "5a79430c-c4f4-4bd9-b6c3-069c459d2dcc" 00:24:21.445 } 00:24:21.445 ] 00:24:21.445 } 00:24:21.445 ] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2600425 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:21.445 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 Malloc1 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.703 10:34:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 [ 00:24:21.703 { 00:24:21.703 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:21.703 "subtype": "Discovery", 00:24:21.703 "listen_addresses": [], 00:24:21.703 "allow_any_host": true, 00:24:21.703 "hosts": [] 00:24:21.703 }, 00:24:21.703 { 00:24:21.703 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.703 "subtype": "NVMe", 00:24:21.703 "listen_addresses": [ 00:24:21.703 { 00:24:21.703 "trtype": "TCP", 00:24:21.703 "adrfam": "IPv4", 00:24:21.703 "traddr": "10.0.0.2", 00:24:21.703 "trsvcid": "4420" 00:24:21.703 } 00:24:21.703 ], 00:24:21.703 "allow_any_host": true, 00:24:21.703 "hosts": [], 00:24:21.703 "serial_number": "SPDK00000000000001", 00:24:21.703 "model_number": 
"SPDK bdev Controller", 00:24:21.703 "max_namespaces": 2, 00:24:21.703 "min_cntlid": 1, 00:24:21.703 "max_cntlid": 65519, 00:24:21.703 "namespaces": [ 00:24:21.703 { 00:24:21.703 "nsid": 1, 00:24:21.703 "bdev_name": "Malloc0", 00:24:21.703 "name": "Malloc0", 00:24:21.703 "nguid": "5A79430CC4F44BD9B6C3069C459D2DCC", 00:24:21.703 "uuid": "5a79430c-c4f4-4bd9-b6c3-069c459d2dcc" 00:24:21.703 }, 00:24:21.703 { 00:24:21.703 "nsid": 2, 00:24:21.703 "bdev_name": "Malloc1", 00:24:21.703 "name": "Malloc1", 00:24:21.703 "nguid": "2EA79D49FEB641DF8CC0BB3C28F7B03C", 00:24:21.703 "uuid": "2ea79d49-feb6-41df-8cc0-bb3c28f7b03c" 00:24:21.703 } 00:24:21.703 ] 00:24:21.703 } 00:24:21.703 ] 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2600425 00:24:21.703 Asynchronous Event Request test 00:24:21.703 Attaching to 10.0.0.2 00:24:21.703 Attached to 10.0.0.2 00:24:21.703 Registering asynchronous event callbacks... 00:24:21.703 Starting namespace attribute notice tests for all controllers... 00:24:21.703 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:21.703 aer_cb - Changed Namespace 00:24:21.703 Cleaning up... 
00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.703 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.703 rmmod nvme_tcp 
00:24:21.703 rmmod nvme_fabrics 00:24:21.703 rmmod nvme_keyring 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2600312 ']' 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2600312 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2600312 ']' 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2600312 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2600312 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2600312' 00:24:21.961 killing process with pid 2600312 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2600312 00:24:21.961 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2600312 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:22.221 10:34:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.221 10:34:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:24.128 00:24:24.128 real 0m5.646s 00:24:24.128 user 0m4.477s 00:24:24.128 sys 0m2.005s 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:24.128 ************************************ 00:24:24.128 END TEST nvmf_aer 00:24:24.128 ************************************ 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.128 ************************************ 00:24:24.128 START TEST nvmf_async_init 
00:24:24.128 ************************************ 00:24:24.128 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:24.387 * Looking for test storage... 00:24:24.387 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:24.387 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:24.387 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:24:24.387 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:24.387 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:24.387 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.387 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:24.388 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:24.388 --rc genhtml_branch_coverage=1 00:24:24.388 --rc genhtml_function_coverage=1 00:24:24.388 --rc genhtml_legend=1 00:24:24.388 --rc geninfo_all_blocks=1 00:24:24.388 --rc geninfo_unexecuted_blocks=1 00:24:24.388 00:24:24.388 ' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.388 --rc genhtml_branch_coverage=1 00:24:24.388 --rc genhtml_function_coverage=1 00:24:24.388 --rc genhtml_legend=1 00:24:24.388 --rc geninfo_all_blocks=1 00:24:24.388 --rc geninfo_unexecuted_blocks=1 00:24:24.388 00:24:24.388 ' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.388 --rc genhtml_branch_coverage=1 00:24:24.388 --rc genhtml_function_coverage=1 00:24:24.388 --rc genhtml_legend=1 00:24:24.388 --rc geninfo_all_blocks=1 00:24:24.388 --rc geninfo_unexecuted_blocks=1 00:24:24.388 00:24:24.388 ' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:24.388 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.388 --rc genhtml_branch_coverage=1 00:24:24.388 --rc genhtml_function_coverage=1 00:24:24.388 --rc genhtml_legend=1 00:24:24.388 --rc geninfo_all_blocks=1 00:24:24.388 --rc geninfo_unexecuted_blocks=1 00:24:24.388 00:24:24.388 ' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.388 10:34:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.388 
10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.388 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=25c6a4aeea594048aa40e2b7ad3d790e 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.388 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.389 10:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.920 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.920 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:26.921 10:34:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:26.921 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:26.921 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:26.921 Found net devices under 0000:09:00.0: cvl_0_0 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:26.921 Found net devices under 0000:09:00.1: cvl_0_1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:26.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:26.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:24:26.921 00:24:26.921 --- 10.0.0.2 ping statistics --- 00:24:26.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.921 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:26.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:26.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:24:26.921 00:24:26.921 --- 10.0.0.1 ping statistics --- 00:24:26.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:26.921 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2602371 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2602371 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2602371 ']' 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:26.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:26.921 10:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 [2024-12-09 10:34:59.032989] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:24:26.921 [2024-12-09 10:34:59.033067] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:26.921 [2024-12-09 10:34:59.106200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:26.921 [2024-12-09 10:34:59.163107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:26.921 [2024-12-09 10:34:59.163186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:26.921 [2024-12-09 10:34:59.163218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.921 [2024-12-09 10:34:59.163236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.921 [2024-12-09 10:34:59.163263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:26.921 [2024-12-09 10:34:59.163878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 [2024-12-09 10:34:59.303711] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 null0 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 25c6a4aeea594048aa40e2b7ad3d790e 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:26.921 [2024-12-09 10:34:59.343977] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.921 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:26.922 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.922 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.179 nvme0n1 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.179 [ 00:24:27.179 { 00:24:27.179 "name": "nvme0n1", 00:24:27.179 "aliases": [ 00:24:27.179 "25c6a4ae-ea59-4048-aa40-e2b7ad3d790e" 00:24:27.179 ], 00:24:27.179 "product_name": "NVMe disk", 00:24:27.179 "block_size": 512, 00:24:27.179 "num_blocks": 2097152, 00:24:27.179 "uuid": "25c6a4ae-ea59-4048-aa40-e2b7ad3d790e", 00:24:27.179 "numa_id": 0, 00:24:27.179 "assigned_rate_limits": { 00:24:27.179 "rw_ios_per_sec": 0, 00:24:27.179 "rw_mbytes_per_sec": 0, 00:24:27.179 "r_mbytes_per_sec": 0, 00:24:27.179 "w_mbytes_per_sec": 0 00:24:27.179 }, 00:24:27.179 "claimed": false, 00:24:27.179 "zoned": false, 00:24:27.179 "supported_io_types": { 00:24:27.179 "read": true, 00:24:27.179 "write": true, 00:24:27.179 "unmap": false, 00:24:27.179 "flush": true, 00:24:27.179 "reset": true, 00:24:27.179 "nvme_admin": true, 00:24:27.179 "nvme_io": true, 00:24:27.179 "nvme_io_md": false, 00:24:27.179 "write_zeroes": true, 00:24:27.179 "zcopy": false, 00:24:27.179 "get_zone_info": false, 00:24:27.179 "zone_management": false, 00:24:27.179 "zone_append": false, 00:24:27.179 "compare": true, 00:24:27.179 "compare_and_write": true, 00:24:27.179 "abort": true, 00:24:27.179 "seek_hole": false, 00:24:27.179 "seek_data": false, 00:24:27.179 "copy": true, 00:24:27.179 
"nvme_iov_md": false 00:24:27.179 }, 00:24:27.179 "memory_domains": [ 00:24:27.179 { 00:24:27.179 "dma_device_id": "system", 00:24:27.179 "dma_device_type": 1 00:24:27.179 } 00:24:27.179 ], 00:24:27.179 "driver_specific": { 00:24:27.179 "nvme": [ 00:24:27.179 { 00:24:27.179 "trid": { 00:24:27.179 "trtype": "TCP", 00:24:27.179 "adrfam": "IPv4", 00:24:27.179 "traddr": "10.0.0.2", 00:24:27.179 "trsvcid": "4420", 00:24:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:27.179 }, 00:24:27.179 "ctrlr_data": { 00:24:27.179 "cntlid": 1, 00:24:27.179 "vendor_id": "0x8086", 00:24:27.179 "model_number": "SPDK bdev Controller", 00:24:27.179 "serial_number": "00000000000000000000", 00:24:27.179 "firmware_revision": "25.01", 00:24:27.179 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:27.179 "oacs": { 00:24:27.179 "security": 0, 00:24:27.179 "format": 0, 00:24:27.179 "firmware": 0, 00:24:27.179 "ns_manage": 0 00:24:27.179 }, 00:24:27.179 "multi_ctrlr": true, 00:24:27.179 "ana_reporting": false 00:24:27.179 }, 00:24:27.179 "vs": { 00:24:27.179 "nvme_version": "1.3" 00:24:27.179 }, 00:24:27.179 "ns_data": { 00:24:27.179 "id": 1, 00:24:27.179 "can_share": true 00:24:27.179 } 00:24:27.179 } 00:24:27.179 ], 00:24:27.179 "mp_policy": "active_passive" 00:24:27.179 } 00:24:27.179 } 00:24:27.179 ] 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.179 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.179 [2024-12-09 10:34:59.596523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:27.179 [2024-12-09 10:34:59.596605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x24ec740 (9): Bad file descriptor 00:24:27.436 [2024-12-09 10:34:59.738262] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:24:27.436 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.436 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:27.436 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.436 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.436 [ 00:24:27.436 { 00:24:27.436 "name": "nvme0n1", 00:24:27.436 "aliases": [ 00:24:27.437 "25c6a4ae-ea59-4048-aa40-e2b7ad3d790e" 00:24:27.437 ], 00:24:27.437 "product_name": "NVMe disk", 00:24:27.437 "block_size": 512, 00:24:27.437 "num_blocks": 2097152, 00:24:27.437 "uuid": "25c6a4ae-ea59-4048-aa40-e2b7ad3d790e", 00:24:27.437 "numa_id": 0, 00:24:27.437 "assigned_rate_limits": { 00:24:27.437 "rw_ios_per_sec": 0, 00:24:27.437 "rw_mbytes_per_sec": 0, 00:24:27.437 "r_mbytes_per_sec": 0, 00:24:27.437 "w_mbytes_per_sec": 0 00:24:27.437 }, 00:24:27.437 "claimed": false, 00:24:27.437 "zoned": false, 00:24:27.437 "supported_io_types": { 00:24:27.437 "read": true, 00:24:27.437 "write": true, 00:24:27.437 "unmap": false, 00:24:27.437 "flush": true, 00:24:27.437 "reset": true, 00:24:27.437 "nvme_admin": true, 00:24:27.437 "nvme_io": true, 00:24:27.437 "nvme_io_md": false, 00:24:27.437 "write_zeroes": true, 00:24:27.437 "zcopy": false, 00:24:27.437 "get_zone_info": false, 00:24:27.437 "zone_management": false, 00:24:27.437 "zone_append": false, 00:24:27.437 "compare": true, 00:24:27.437 "compare_and_write": true, 00:24:27.437 "abort": true, 00:24:27.437 "seek_hole": false, 00:24:27.437 "seek_data": false, 00:24:27.437 "copy": true, 00:24:27.437 "nvme_iov_md": false 00:24:27.437 }, 00:24:27.437 "memory_domains": [ 
00:24:27.437 { 00:24:27.437 "dma_device_id": "system", 00:24:27.437 "dma_device_type": 1 00:24:27.437 } 00:24:27.437 ], 00:24:27.437 "driver_specific": { 00:24:27.437 "nvme": [ 00:24:27.437 { 00:24:27.437 "trid": { 00:24:27.437 "trtype": "TCP", 00:24:27.437 "adrfam": "IPv4", 00:24:27.437 "traddr": "10.0.0.2", 00:24:27.437 "trsvcid": "4420", 00:24:27.437 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:27.437 }, 00:24:27.437 "ctrlr_data": { 00:24:27.437 "cntlid": 2, 00:24:27.437 "vendor_id": "0x8086", 00:24:27.437 "model_number": "SPDK bdev Controller", 00:24:27.437 "serial_number": "00000000000000000000", 00:24:27.437 "firmware_revision": "25.01", 00:24:27.437 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:27.437 "oacs": { 00:24:27.437 "security": 0, 00:24:27.437 "format": 0, 00:24:27.437 "firmware": 0, 00:24:27.437 "ns_manage": 0 00:24:27.437 }, 00:24:27.437 "multi_ctrlr": true, 00:24:27.437 "ana_reporting": false 00:24:27.437 }, 00:24:27.437 "vs": { 00:24:27.437 "nvme_version": "1.3" 00:24:27.437 }, 00:24:27.437 "ns_data": { 00:24:27.437 "id": 1, 00:24:27.437 "can_share": true 00:24:27.437 } 00:24:27.437 } 00:24:27.437 ], 00:24:27.437 "mp_policy": "active_passive" 00:24:27.437 } 00:24:27.437 } 00:24:27.437 ] 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gVJPVevPn5 
00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gVJPVevPn5 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.gVJPVevPn5 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.437 [2024-12-09 10:34:59.793171] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:27.437 [2024-12-09 10:34:59.793303] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.437 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.437 [2024-12-09 10:34:59.809207] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:27.695 nvme0n1 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.695 [ 00:24:27.695 { 00:24:27.695 "name": "nvme0n1", 00:24:27.695 "aliases": [ 00:24:27.695 "25c6a4ae-ea59-4048-aa40-e2b7ad3d790e" 00:24:27.695 ], 00:24:27.695 "product_name": "NVMe disk", 00:24:27.695 "block_size": 512, 00:24:27.695 "num_blocks": 2097152, 00:24:27.695 "uuid": "25c6a4ae-ea59-4048-aa40-e2b7ad3d790e", 00:24:27.695 "numa_id": 0, 00:24:27.695 "assigned_rate_limits": { 00:24:27.695 "rw_ios_per_sec": 0, 00:24:27.695 
"rw_mbytes_per_sec": 0, 00:24:27.695 "r_mbytes_per_sec": 0, 00:24:27.695 "w_mbytes_per_sec": 0 00:24:27.695 }, 00:24:27.695 "claimed": false, 00:24:27.695 "zoned": false, 00:24:27.695 "supported_io_types": { 00:24:27.695 "read": true, 00:24:27.695 "write": true, 00:24:27.695 "unmap": false, 00:24:27.695 "flush": true, 00:24:27.695 "reset": true, 00:24:27.695 "nvme_admin": true, 00:24:27.695 "nvme_io": true, 00:24:27.695 "nvme_io_md": false, 00:24:27.695 "write_zeroes": true, 00:24:27.695 "zcopy": false, 00:24:27.695 "get_zone_info": false, 00:24:27.695 "zone_management": false, 00:24:27.695 "zone_append": false, 00:24:27.695 "compare": true, 00:24:27.695 "compare_and_write": true, 00:24:27.695 "abort": true, 00:24:27.695 "seek_hole": false, 00:24:27.695 "seek_data": false, 00:24:27.695 "copy": true, 00:24:27.695 "nvme_iov_md": false 00:24:27.695 }, 00:24:27.695 "memory_domains": [ 00:24:27.695 { 00:24:27.695 "dma_device_id": "system", 00:24:27.695 "dma_device_type": 1 00:24:27.695 } 00:24:27.695 ], 00:24:27.695 "driver_specific": { 00:24:27.695 "nvme": [ 00:24:27.695 { 00:24:27.695 "trid": { 00:24:27.695 "trtype": "TCP", 00:24:27.695 "adrfam": "IPv4", 00:24:27.695 "traddr": "10.0.0.2", 00:24:27.695 "trsvcid": "4421", 00:24:27.695 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:27.695 }, 00:24:27.695 "ctrlr_data": { 00:24:27.695 "cntlid": 3, 00:24:27.695 "vendor_id": "0x8086", 00:24:27.695 "model_number": "SPDK bdev Controller", 00:24:27.695 "serial_number": "00000000000000000000", 00:24:27.695 "firmware_revision": "25.01", 00:24:27.695 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:27.695 "oacs": { 00:24:27.695 "security": 0, 00:24:27.695 "format": 0, 00:24:27.695 "firmware": 0, 00:24:27.695 "ns_manage": 0 00:24:27.695 }, 00:24:27.695 "multi_ctrlr": true, 00:24:27.695 "ana_reporting": false 00:24:27.695 }, 00:24:27.695 "vs": { 00:24:27.695 "nvme_version": "1.3" 00:24:27.695 }, 00:24:27.695 "ns_data": { 00:24:27.695 "id": 1, 00:24:27.695 "can_share": true 00:24:27.695 } 
00:24:27.695 } 00:24:27.695 ], 00:24:27.695 "mp_policy": "active_passive" 00:24:27.695 } 00:24:27.695 } 00:24:27.695 ] 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.gVJPVevPn5 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:27.695 rmmod nvme_tcp 00:24:27.695 rmmod nvme_fabrics 00:24:27.695 rmmod nvme_keyring 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:27.695 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:27.695 10:34:59 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2602371 ']' 00:24:27.696 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2602371 00:24:27.696 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2602371 ']' 00:24:27.696 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2602371 00:24:27.696 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:24:27.696 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:27.696 10:34:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2602371 00:24:27.696 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:27.696 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:27.696 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2602371' 00:24:27.696 killing process with pid 2602371 00:24:27.696 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2602371 00:24:27.696 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2602371 00:24:27.955 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:27.955 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:27.955 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:27.955 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:27.955 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:27.955 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:27.955 
10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:27.956 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:27.956 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:27.956 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.956 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.956 10:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.862 10:35:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:29.862 00:24:29.862 real 0m5.737s 00:24:29.862 user 0m2.217s 00:24:29.862 sys 0m1.964s 00:24:29.862 10:35:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:29.862 10:35:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:29.862 ************************************ 00:24:29.862 END TEST nvmf_async_init 00:24:29.862 ************************************ 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.121 ************************************ 00:24:30.121 START TEST dma 00:24:30.121 ************************************ 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:30.121 * Looking for test storage... 00:24:30.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:30.121 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:30.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.122 --rc genhtml_branch_coverage=1 00:24:30.122 --rc genhtml_function_coverage=1 00:24:30.122 --rc genhtml_legend=1 00:24:30.122 --rc geninfo_all_blocks=1 00:24:30.122 --rc geninfo_unexecuted_blocks=1 00:24:30.122 00:24:30.122 ' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:30.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.122 --rc genhtml_branch_coverage=1 00:24:30.122 --rc genhtml_function_coverage=1 
00:24:30.122 --rc genhtml_legend=1 00:24:30.122 --rc geninfo_all_blocks=1 00:24:30.122 --rc geninfo_unexecuted_blocks=1 00:24:30.122 00:24:30.122 ' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:30.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.122 --rc genhtml_branch_coverage=1 00:24:30.122 --rc genhtml_function_coverage=1 00:24:30.122 --rc genhtml_legend=1 00:24:30.122 --rc geninfo_all_blocks=1 00:24:30.122 --rc geninfo_unexecuted_blocks=1 00:24:30.122 00:24:30.122 ' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:30.122 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.122 --rc genhtml_branch_coverage=1 00:24:30.122 --rc genhtml_function_coverage=1 00:24:30.122 --rc genhtml_legend=1 00:24:30.122 --rc geninfo_all_blocks=1 00:24:30.122 --rc geninfo_unexecuted_blocks=1 00:24:30.122 00:24:30.122 ' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:30.122 
10:35:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:30.122 00:24:30.122 real 0m0.170s 00:24:30.122 user 0m0.120s 00:24:30.122 sys 0m0.059s 00:24:30.122 10:35:02 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:30.122 ************************************ 00:24:30.122 END TEST dma 00:24:30.122 ************************************ 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:30.122 10:35:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.382 ************************************ 00:24:30.382 START TEST nvmf_identify 00:24:30.382 ************************************ 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:30.382 * Looking for test storage... 
00:24:30.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:30.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.382 --rc genhtml_branch_coverage=1 00:24:30.382 --rc genhtml_function_coverage=1 00:24:30.382 --rc genhtml_legend=1 00:24:30.382 --rc geninfo_all_blocks=1 00:24:30.382 --rc geninfo_unexecuted_blocks=1 00:24:30.382 00:24:30.382 ' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:24:30.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.382 --rc genhtml_branch_coverage=1 00:24:30.382 --rc genhtml_function_coverage=1 00:24:30.382 --rc genhtml_legend=1 00:24:30.382 --rc geninfo_all_blocks=1 00:24:30.382 --rc geninfo_unexecuted_blocks=1 00:24:30.382 00:24:30.382 ' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:30.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.382 --rc genhtml_branch_coverage=1 00:24:30.382 --rc genhtml_function_coverage=1 00:24:30.382 --rc genhtml_legend=1 00:24:30.382 --rc geninfo_all_blocks=1 00:24:30.382 --rc geninfo_unexecuted_blocks=1 00:24:30.382 00:24:30.382 ' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:30.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:30.382 --rc genhtml_branch_coverage=1 00:24:30.382 --rc genhtml_function_coverage=1 00:24:30.382 --rc genhtml_legend=1 00:24:30.382 --rc geninfo_all_blocks=1 00:24:30.382 --rc geninfo_unexecuted_blocks=1 00:24:30.382 00:24:30.382 ' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.382 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:30.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:30.383 10:35:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.916 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.916 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:32.916 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:32.916 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:32.916 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:32.916 10:35:04 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:32.916 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:32.917 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:32.917 
10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:32.917 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:32.917 Found net devices under 0000:09:00.0: cvl_0_0 00:24:32.917 10:35:04 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:32.917 Found net devices under 0000:09:00.1: cvl_0_1 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:32.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:24:32.917 00:24:32.917 --- 10.0.0.2 ping statistics --- 00:24:32.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.917 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:24:32.917 00:24:32.917 --- 10.0.0.1 ping statistics --- 00:24:32.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.917 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2604627 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2604627 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2604627 ']' 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.917 10:35:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 [2024-12-09 10:35:04.997416] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:24:32.917 [2024-12-09 10:35:04.997520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.917 [2024-12-09 10:35:05.068942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:32.917 [2024-12-09 10:35:05.130014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.917 [2024-12-09 10:35:05.130077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.917 [2024-12-09 10:35:05.130101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.917 [2024-12-09 10:35:05.130112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.917 [2024-12-09 10:35:05.130137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:32.917 [2024-12-09 10:35:05.131750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.917 [2024-12-09 10:35:05.131821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.917 [2024-12-09 10:35:05.131881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.917 [2024-12-09 10:35:05.131884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 [2024-12-09 10:35:05.260081] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 Malloc0 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 10:35:05 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.917 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:32.918 [2024-12-09 10:35:05.349558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.918 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.918 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:32.918 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.918 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.176 10:35:05 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.176 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:33.176 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.176 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.176 [ 00:24:33.176 { 00:24:33.176 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:33.176 "subtype": "Discovery", 00:24:33.176 "listen_addresses": [ 00:24:33.176 { 00:24:33.176 "trtype": "TCP", 00:24:33.176 "adrfam": "IPv4", 00:24:33.176 "traddr": "10.0.0.2", 00:24:33.176 "trsvcid": "4420" 00:24:33.176 } 00:24:33.176 ], 00:24:33.176 "allow_any_host": true, 00:24:33.176 "hosts": [] 00:24:33.176 }, 00:24:33.176 { 00:24:33.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.176 "subtype": "NVMe", 00:24:33.176 "listen_addresses": [ 00:24:33.176 { 00:24:33.176 "trtype": "TCP", 00:24:33.176 "adrfam": "IPv4", 00:24:33.176 "traddr": "10.0.0.2", 00:24:33.176 "trsvcid": "4420" 00:24:33.176 } 00:24:33.176 ], 00:24:33.176 "allow_any_host": true, 00:24:33.176 "hosts": [], 00:24:33.176 "serial_number": "SPDK00000000000001", 00:24:33.176 "model_number": "SPDK bdev Controller", 00:24:33.176 "max_namespaces": 32, 00:24:33.176 "min_cntlid": 1, 00:24:33.176 "max_cntlid": 65519, 00:24:33.176 "namespaces": [ 00:24:33.176 { 00:24:33.176 "nsid": 1, 00:24:33.176 "bdev_name": "Malloc0", 00:24:33.176 "name": "Malloc0", 00:24:33.176 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:33.176 "eui64": "ABCDEF0123456789", 00:24:33.176 "uuid": "649803ea-2031-4e78-bca5-5b6f47797143" 00:24:33.176 } 00:24:33.176 ] 00:24:33.176 } 00:24:33.176 ] 00:24:33.176 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.176 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:33.176 [2024-12-09 10:35:05.392438] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:24:33.176 [2024-12-09 10:35:05.392481] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604662 ] 00:24:33.176 [2024-12-09 10:35:05.443255] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:33.176 [2024-12-09 10:35:05.443321] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:33.176 [2024-12-09 10:35:05.443332] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:33.176 [2024-12-09 10:35:05.443354] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:33.176 [2024-12-09 10:35:05.443374] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:33.176 [2024-12-09 10:35:05.447605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:33.176 [2024-12-09 10:35:05.447668] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xecd690 0 00:24:33.176 [2024-12-09 10:35:05.447871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:33.176 [2024-12-09 10:35:05.447889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:33.176 [2024-12-09 10:35:05.447903] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:33.176 [2024-12-09 10:35:05.447910] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:33.176 [2024-12-09 10:35:05.447957] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.176 [2024-12-09 10:35:05.447969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.176 [2024-12-09 10:35:05.447976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.176 [2024-12-09 10:35:05.447993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:33.176 [2024-12-09 10:35:05.448018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.176 [2024-12-09 10:35:05.454157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.176 [2024-12-09 10:35:05.454176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.176 [2024-12-09 10:35:05.454183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.176 [2024-12-09 10:35:05.454191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.176 [2024-12-09 10:35:05.454205] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:33.176 [2024-12-09 10:35:05.454217] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:33.176 [2024-12-09 10:35:05.454226] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:33.176 [2024-12-09 10:35:05.454249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.176 [2024-12-09 10:35:05.454258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.176 [2024-12-09 10:35:05.454264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 
00:24:33.176 [2024-12-09 10:35:05.454275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.176 [2024-12-09 10:35:05.454298] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.176 [2024-12-09 10:35:05.454439] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.176 [2024-12-09 10:35:05.454451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.176 [2024-12-09 10:35:05.454458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.176 [2024-12-09 10:35:05.454465] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.176 [2024-12-09 10:35:05.454479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:33.177 [2024-12-09 10:35:05.454493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:33.177 [2024-12-09 10:35:05.454506] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.177 [2024-12-09 10:35:05.454538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.177 [2024-12-09 10:35:05.454560] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.177 [2024-12-09 10:35:05.454684] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.177 [2024-12-09 10:35:05.454696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:33.177 [2024-12-09 10:35:05.454703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.177 [2024-12-09 10:35:05.454718] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:33.177 [2024-12-09 10:35:05.454732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:33.177 [2024-12-09 10:35:05.454744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454757] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.177 [2024-12-09 10:35:05.454767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.177 [2024-12-09 10:35:05.454788] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.177 [2024-12-09 10:35:05.454861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.177 [2024-12-09 10:35:05.454873] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.177 [2024-12-09 10:35:05.454880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454887] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.177 [2024-12-09 10:35:05.454895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:33.177 [2024-12-09 10:35:05.454911] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454920] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.454926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.177 [2024-12-09 10:35:05.454937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.177 [2024-12-09 10:35:05.454957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.177 [2024-12-09 10:35:05.455033] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.177 [2024-12-09 10:35:05.455047] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.177 [2024-12-09 10:35:05.455054] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455061] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.177 [2024-12-09 10:35:05.455068] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:33.177 [2024-12-09 10:35:05.455077] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:33.177 [2024-12-09 10:35:05.455089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:33.177 [2024-12-09 10:35:05.455199] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:33.177 [2024-12-09 10:35:05.455215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:24:33.177 [2024-12-09 10:35:05.455230] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.177 [2024-12-09 10:35:05.455254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.177 [2024-12-09 10:35:05.455275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.177 [2024-12-09 10:35:05.455399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.177 [2024-12-09 10:35:05.455411] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.177 [2024-12-09 10:35:05.455418] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455425] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.177 [2024-12-09 10:35:05.455433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:33.177 [2024-12-09 10:35:05.455448] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455457] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.177 [2024-12-09 10:35:05.455474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.177 [2024-12-09 10:35:05.455494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.177 [2024-12-09 
10:35:05.455616] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.177 [2024-12-09 10:35:05.455628] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.177 [2024-12-09 10:35:05.455635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.177 [2024-12-09 10:35:05.455649] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:33.177 [2024-12-09 10:35:05.455657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:33.177 [2024-12-09 10:35:05.455669] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:33.177 [2024-12-09 10:35:05.455684] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:33.177 [2024-12-09 10:35:05.455699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.177 [2024-12-09 10:35:05.455718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.177 [2024-12-09 10:35:05.455738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.177 [2024-12-09 10:35:05.455864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.177 [2024-12-09 10:35:05.455879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:24:33.177 [2024-12-09 10:35:05.455886] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455892] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecd690): datao=0, datal=4096, cccid=0 00:24:33.177 [2024-12-09 10:35:05.455907] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2f100) on tqpair(0xecd690): expected_datao=0, payload_size=4096 00:24:33.177 [2024-12-09 10:35:05.455916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455934] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.455943] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.496256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.177 [2024-12-09 10:35:05.496275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.177 [2024-12-09 10:35:05.496283] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.177 [2024-12-09 10:35:05.496290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.177 [2024-12-09 10:35:05.496308] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:33.177 [2024-12-09 10:35:05.496318] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:33.178 [2024-12-09 10:35:05.496325] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:33.178 [2024-12-09 10:35:05.496334] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:33.178 [2024-12-09 10:35:05.496341] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:24:33.178 [2024-12-09 10:35:05.496349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:33.178 [2024-12-09 10:35:05.496364] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:33.178 [2024-12-09 10:35:05.496376] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496401] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:33.178 [2024-12-09 10:35:05.496424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.178 [2024-12-09 10:35:05.496514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.178 [2024-12-09 10:35:05.496526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.178 [2024-12-09 10:35:05.496533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496539] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.178 [2024-12-09 10:35:05.496551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496564] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.178 [2024-12-09 10:35:05.496583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496596] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.178 [2024-12-09 10:35:05.496614] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496620] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.178 [2024-12-09 10:35:05.496650] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.178 [2024-12-09 10:35:05.496679] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:33.178 [2024-12-09 10:35:05.496698] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:24:33.178 [2024-12-09 10:35:05.496711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.178 [2024-12-09 10:35:05.496750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f100, cid 0, qid 0 00:24:33.178 [2024-12-09 10:35:05.496761] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f280, cid 1, qid 0 00:24:33.178 [2024-12-09 10:35:05.496769] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f400, cid 2, qid 0 00:24:33.178 [2024-12-09 10:35:05.496776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.178 [2024-12-09 10:35:05.496783] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f700, cid 4, qid 0 00:24:33.178 [2024-12-09 10:35:05.496888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.178 [2024-12-09 10:35:05.496900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.178 [2024-12-09 10:35:05.496907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f700) on tqpair=0xecd690 00:24:33.178 [2024-12-09 10:35:05.496922] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:33.178 [2024-12-09 10:35:05.496930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:33.178 [2024-12-09 10:35:05.496948] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.496957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.496967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.178 [2024-12-09 10:35:05.496988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f700, cid 4, qid 0 00:24:33.178 [2024-12-09 10:35:05.497072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.178 [2024-12-09 10:35:05.497084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.178 [2024-12-09 10:35:05.497091] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.497097] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecd690): datao=0, datal=4096, cccid=4 00:24:33.178 [2024-12-09 10:35:05.497104] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2f700) on tqpair(0xecd690): expected_datao=0, payload_size=4096 00:24:33.178 [2024-12-09 10:35:05.497111] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.497127] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.501146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.501165] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.178 [2024-12-09 10:35:05.501175] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.178 [2024-12-09 10:35:05.501182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.501188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f700) on tqpair=0xecd690 00:24:33.178 [2024-12-09 10:35:05.501206] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:33.178 [2024-12-09 10:35:05.501258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.501269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.501279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.178 [2024-12-09 10:35:05.501290] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.501297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.178 [2024-12-09 10:35:05.501303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xecd690) 00:24:33.178 [2024-12-09 10:35:05.501311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.179 [2024-12-09 10:35:05.501338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f700, cid 4, qid 0 00:24:33.179 [2024-12-09 10:35:05.501364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f880, cid 5, qid 0 00:24:33.179 [2024-12-09 10:35:05.501508] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.179 [2024-12-09 10:35:05.501520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.179 [2024-12-09 10:35:05.501527] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.501533] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecd690): datao=0, datal=1024, cccid=4 00:24:33.179 [2024-12-09 10:35:05.501540] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2f700) on tqpair(0xecd690): expected_datao=0, 
payload_size=1024 00:24:33.179 [2024-12-09 10:35:05.501547] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.501557] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.501564] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.501572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.179 [2024-12-09 10:35:05.501581] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.179 [2024-12-09 10:35:05.501587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.501593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f880) on tqpair=0xecd690 00:24:33.179 [2024-12-09 10:35:05.542245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.179 [2024-12-09 10:35:05.542265] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.179 [2024-12-09 10:35:05.542273] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.542280] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f700) on tqpair=0xecd690 00:24:33.179 [2024-12-09 10:35:05.542297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.542307] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecd690) 00:24:33.179 [2024-12-09 10:35:05.542318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.179 [2024-12-09 10:35:05.542347] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f700, cid 4, qid 0 00:24:33.179 [2024-12-09 10:35:05.542448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.179 [2024-12-09 10:35:05.542468] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.179 [2024-12-09 10:35:05.542476] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.542483] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecd690): datao=0, datal=3072, cccid=4 00:24:33.179 [2024-12-09 10:35:05.542490] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2f700) on tqpair(0xecd690): expected_datao=0, payload_size=3072 00:24:33.179 [2024-12-09 10:35:05.542498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.542518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.542527] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.587155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.179 [2024-12-09 10:35:05.587188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.179 [2024-12-09 10:35:05.587195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.587202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f700) on tqpair=0xecd690 00:24:33.179 [2024-12-09 10:35:05.587218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.587228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xecd690) 00:24:33.179 [2024-12-09 10:35:05.587239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.179 [2024-12-09 10:35:05.587268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f700, cid 4, qid 0 00:24:33.179 [2024-12-09 10:35:05.587358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.179 [2024-12-09 
10:35:05.587370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.179 [2024-12-09 10:35:05.587377] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.587383] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xecd690): datao=0, datal=8, cccid=4 00:24:33.179 [2024-12-09 10:35:05.587391] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xf2f700) on tqpair(0xecd690): expected_datao=0, payload_size=8 00:24:33.179 [2024-12-09 10:35:05.587398] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.587408] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.179 [2024-12-09 10:35:05.587415] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.439 [2024-12-09 10:35:05.628229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.439 [2024-12-09 10:35:05.628248] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.439 [2024-12-09 10:35:05.628255] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.439 [2024-12-09 10:35:05.628263] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f700) on tqpair=0xecd690 00:24:33.439 ===================================================== 00:24:33.439 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:33.439 ===================================================== 00:24:33.439 Controller Capabilities/Features 00:24:33.439 ================================ 00:24:33.439 Vendor ID: 0000 00:24:33.439 Subsystem Vendor ID: 0000 00:24:33.439 Serial Number: .................... 00:24:33.439 Model Number: ........................................ 
00:24:33.439 Firmware Version: 25.01 00:24:33.439 Recommended Arb Burst: 0 00:24:33.439 IEEE OUI Identifier: 00 00 00 00:24:33.439 Multi-path I/O 00:24:33.439 May have multiple subsystem ports: No 00:24:33.439 May have multiple controllers: No 00:24:33.439 Associated with SR-IOV VF: No 00:24:33.439 Max Data Transfer Size: 131072 00:24:33.439 Max Number of Namespaces: 0 00:24:33.439 Max Number of I/O Queues: 1024 00:24:33.439 NVMe Specification Version (VS): 1.3 00:24:33.439 NVMe Specification Version (Identify): 1.3 00:24:33.439 Maximum Queue Entries: 128 00:24:33.439 Contiguous Queues Required: Yes 00:24:33.439 Arbitration Mechanisms Supported 00:24:33.439 Weighted Round Robin: Not Supported 00:24:33.439 Vendor Specific: Not Supported 00:24:33.439 Reset Timeout: 15000 ms 00:24:33.439 Doorbell Stride: 4 bytes 00:24:33.439 NVM Subsystem Reset: Not Supported 00:24:33.439 Command Sets Supported 00:24:33.439 NVM Command Set: Supported 00:24:33.439 Boot Partition: Not Supported 00:24:33.439 Memory Page Size Minimum: 4096 bytes 00:24:33.439 Memory Page Size Maximum: 4096 bytes 00:24:33.439 Persistent Memory Region: Not Supported 00:24:33.439 Optional Asynchronous Events Supported 00:24:33.439 Namespace Attribute Notices: Not Supported 00:24:33.439 Firmware Activation Notices: Not Supported 00:24:33.439 ANA Change Notices: Not Supported 00:24:33.439 PLE Aggregate Log Change Notices: Not Supported 00:24:33.439 LBA Status Info Alert Notices: Not Supported 00:24:33.439 EGE Aggregate Log Change Notices: Not Supported 00:24:33.439 Normal NVM Subsystem Shutdown event: Not Supported 00:24:33.439 Zone Descriptor Change Notices: Not Supported 00:24:33.440 Discovery Log Change Notices: Supported 00:24:33.440 Controller Attributes 00:24:33.440 128-bit Host Identifier: Not Supported 00:24:33.440 Non-Operational Permissive Mode: Not Supported 00:24:33.440 NVM Sets: Not Supported 00:24:33.440 Read Recovery Levels: Not Supported 00:24:33.440 Endurance Groups: Not Supported 00:24:33.440 
Predictable Latency Mode: Not Supported 00:24:33.440 Traffic Based Keep ALive: Not Supported 00:24:33.440 Namespace Granularity: Not Supported 00:24:33.440 SQ Associations: Not Supported 00:24:33.440 UUID List: Not Supported 00:24:33.440 Multi-Domain Subsystem: Not Supported 00:24:33.440 Fixed Capacity Management: Not Supported 00:24:33.440 Variable Capacity Management: Not Supported 00:24:33.440 Delete Endurance Group: Not Supported 00:24:33.440 Delete NVM Set: Not Supported 00:24:33.440 Extended LBA Formats Supported: Not Supported 00:24:33.440 Flexible Data Placement Supported: Not Supported 00:24:33.440 00:24:33.440 Controller Memory Buffer Support 00:24:33.440 ================================ 00:24:33.440 Supported: No 00:24:33.440 00:24:33.440 Persistent Memory Region Support 00:24:33.440 ================================ 00:24:33.440 Supported: No 00:24:33.440 00:24:33.440 Admin Command Set Attributes 00:24:33.440 ============================ 00:24:33.440 Security Send/Receive: Not Supported 00:24:33.440 Format NVM: Not Supported 00:24:33.440 Firmware Activate/Download: Not Supported 00:24:33.440 Namespace Management: Not Supported 00:24:33.440 Device Self-Test: Not Supported 00:24:33.440 Directives: Not Supported 00:24:33.440 NVMe-MI: Not Supported 00:24:33.440 Virtualization Management: Not Supported 00:24:33.440 Doorbell Buffer Config: Not Supported 00:24:33.440 Get LBA Status Capability: Not Supported 00:24:33.440 Command & Feature Lockdown Capability: Not Supported 00:24:33.440 Abort Command Limit: 1 00:24:33.440 Async Event Request Limit: 4 00:24:33.440 Number of Firmware Slots: N/A 00:24:33.440 Firmware Slot 1 Read-Only: N/A 00:24:33.440 Firmware Activation Without Reset: N/A 00:24:33.440 Multiple Update Detection Support: N/A 00:24:33.440 Firmware Update Granularity: No Information Provided 00:24:33.440 Per-Namespace SMART Log: No 00:24:33.440 Asymmetric Namespace Access Log Page: Not Supported 00:24:33.440 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:33.440 Command Effects Log Page: Not Supported 00:24:33.440 Get Log Page Extended Data: Supported 00:24:33.440 Telemetry Log Pages: Not Supported 00:24:33.440 Persistent Event Log Pages: Not Supported 00:24:33.440 Supported Log Pages Log Page: May Support 00:24:33.440 Commands Supported & Effects Log Page: Not Supported 00:24:33.440 Feature Identifiers & Effects Log Page:May Support 00:24:33.440 NVMe-MI Commands & Effects Log Page: May Support 00:24:33.440 Data Area 4 for Telemetry Log: Not Supported 00:24:33.440 Error Log Page Entries Supported: 128 00:24:33.440 Keep Alive: Not Supported 00:24:33.440 00:24:33.440 NVM Command Set Attributes 00:24:33.440 ========================== 00:24:33.440 Submission Queue Entry Size 00:24:33.440 Max: 1 00:24:33.440 Min: 1 00:24:33.440 Completion Queue Entry Size 00:24:33.440 Max: 1 00:24:33.440 Min: 1 00:24:33.440 Number of Namespaces: 0 00:24:33.440 Compare Command: Not Supported 00:24:33.440 Write Uncorrectable Command: Not Supported 00:24:33.440 Dataset Management Command: Not Supported 00:24:33.440 Write Zeroes Command: Not Supported 00:24:33.440 Set Features Save Field: Not Supported 00:24:33.440 Reservations: Not Supported 00:24:33.440 Timestamp: Not Supported 00:24:33.440 Copy: Not Supported 00:24:33.440 Volatile Write Cache: Not Present 00:24:33.440 Atomic Write Unit (Normal): 1 00:24:33.440 Atomic Write Unit (PFail): 1 00:24:33.440 Atomic Compare & Write Unit: 1 00:24:33.440 Fused Compare & Write: Supported 00:24:33.440 Scatter-Gather List 00:24:33.440 SGL Command Set: Supported 00:24:33.440 SGL Keyed: Supported 00:24:33.440 SGL Bit Bucket Descriptor: Not Supported 00:24:33.440 SGL Metadata Pointer: Not Supported 00:24:33.440 Oversized SGL: Not Supported 00:24:33.440 SGL Metadata Address: Not Supported 00:24:33.440 SGL Offset: Supported 00:24:33.440 Transport SGL Data Block: Not Supported 00:24:33.440 Replay Protected Memory Block: Not Supported 00:24:33.440 00:24:33.440 
Firmware Slot Information 00:24:33.440 ========================= 00:24:33.440 Active slot: 0 00:24:33.440 00:24:33.440 00:24:33.440 Error Log 00:24:33.440 ========= 00:24:33.440 00:24:33.440 Active Namespaces 00:24:33.440 ================= 00:24:33.440 Discovery Log Page 00:24:33.440 ================== 00:24:33.440 Generation Counter: 2 00:24:33.440 Number of Records: 2 00:24:33.440 Record Format: 0 00:24:33.440 00:24:33.440 Discovery Log Entry 0 00:24:33.440 ---------------------- 00:24:33.440 Transport Type: 3 (TCP) 00:24:33.440 Address Family: 1 (IPv4) 00:24:33.440 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:33.440 Entry Flags: 00:24:33.440 Duplicate Returned Information: 1 00:24:33.440 Explicit Persistent Connection Support for Discovery: 1 00:24:33.440 Transport Requirements: 00:24:33.440 Secure Channel: Not Required 00:24:33.440 Port ID: 0 (0x0000) 00:24:33.440 Controller ID: 65535 (0xffff) 00:24:33.440 Admin Max SQ Size: 128 00:24:33.440 Transport Service Identifier: 4420 00:24:33.440 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:33.440 Transport Address: 10.0.0.2 00:24:33.440 Discovery Log Entry 1 00:24:33.440 ---------------------- 00:24:33.440 Transport Type: 3 (TCP) 00:24:33.440 Address Family: 1 (IPv4) 00:24:33.440 Subsystem Type: 2 (NVM Subsystem) 00:24:33.440 Entry Flags: 00:24:33.440 Duplicate Returned Information: 0 00:24:33.440 Explicit Persistent Connection Support for Discovery: 0 00:24:33.440 Transport Requirements: 00:24:33.440 Secure Channel: Not Required 00:24:33.440 Port ID: 0 (0x0000) 00:24:33.440 Controller ID: 65535 (0xffff) 00:24:33.440 Admin Max SQ Size: 128 00:24:33.440 Transport Service Identifier: 4420 00:24:33.440 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:33.440 Transport Address: 10.0.0.2 [2024-12-09 10:35:05.628376] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:33.440 [2024-12-09 
10:35:05.628399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f100) on tqpair=0xecd690 00:24:33.440 [2024-12-09 10:35:05.628411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.440 [2024-12-09 10:35:05.628420] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f280) on tqpair=0xecd690 00:24:33.440 [2024-12-09 10:35:05.628428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.440 [2024-12-09 10:35:05.628435] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f400) on tqpair=0xecd690 00:24:33.440 [2024-12-09 10:35:05.628443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.440 [2024-12-09 10:35:05.628451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.440 [2024-12-09 10:35:05.628461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.440 [2024-12-09 10:35:05.628479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.440 [2024-12-09 10:35:05.628488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.440 [2024-12-09 10:35:05.628494] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.440 [2024-12-09 10:35:05.628520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.440 [2024-12-09 10:35:05.628545] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.440 [2024-12-09 10:35:05.628637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.440 [2024-12-09 
10:35:05.628652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.440 [2024-12-09 10:35:05.628659] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.440 [2024-12-09 10:35:05.628665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.440 [2024-12-09 10:35:05.628677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.440 [2024-12-09 10:35:05.628685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.440 [2024-12-09 10:35:05.628691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.440 [2024-12-09 10:35:05.628701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.440 [2024-12-09 10:35:05.628728] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.440 [2024-12-09 10:35:05.628818] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.440 [2024-12-09 10:35:05.628832] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.440 [2024-12-09 10:35:05.628838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.440 [2024-12-09 10:35:05.628845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.628853] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:33.441 [2024-12-09 10:35:05.628861] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:33.441 [2024-12-09 10:35:05.628877] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.628886] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 
[2024-12-09 10:35:05.628892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.628903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.628923] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.629000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.629012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.629019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.629042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629051] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.629067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.629087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.629169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.629187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.629195] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 
00:24:33.441 [2024-12-09 10:35:05.629218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629227] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.629243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.629264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.629345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.629357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.629364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.629386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.629412] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.629432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.629514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.629528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 
[2024-12-09 10:35:05.629535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.629557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629566] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.629583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.629603] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.629694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.629708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.629715] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629721] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.629737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629752] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.629762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.629782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 
00:24:33.441 [2024-12-09 10:35:05.629854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.629867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.629878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.629901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629909] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.629916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.629926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.629946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.630037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.630051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.630057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.630080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.630106] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.630126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.630226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.630240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.630247] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.630270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.630296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.630317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.630394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.630408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.630415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630422] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.630438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630447] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630453] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.630463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.630484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.630558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.630571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.630578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.630605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630615] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630621] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.441 [2024-12-09 10:35:05.630631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.441 [2024-12-09 10:35:05.630652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.441 [2024-12-09 10:35:05.630725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.441 [2024-12-09 10:35:05.630737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.441 [2024-12-09 10:35:05.630744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630751] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.441 [2024-12-09 10:35:05.630766] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.441 [2024-12-09 10:35:05.630775] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.630782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.442 [2024-12-09 10:35:05.630792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.442 [2024-12-09 10:35:05.630812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.442 [2024-12-09 10:35:05.630883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.442 [2024-12-09 10:35:05.630896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.442 [2024-12-09 10:35:05.630903] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.630909] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.442 [2024-12-09 10:35:05.630925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.630934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.630941] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.442 [2024-12-09 10:35:05.630951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.442 [2024-12-09 10:35:05.630971] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.442 [2024-12-09 10:35:05.631061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.442 [2024-12-09 
10:35:05.631073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.442 [2024-12-09 10:35:05.631079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.631086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.442 [2024-12-09 10:35:05.631102] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.631111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.631117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xecd690) 00:24:33.442 [2024-12-09 10:35:05.631127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.442 [2024-12-09 10:35:05.635155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xf2f580, cid 3, qid 0 00:24:33.442 [2024-12-09 10:35:05.635275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.442 [2024-12-09 10:35:05.635288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.442 [2024-12-09 10:35:05.635295] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.635301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xf2f580) on tqpair=0xecd690 00:24:33.442 [2024-12-09 10:35:05.635319] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:33.442 00:24:33.442 10:35:05 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:33.442 [2024-12-09 10:35:05.752533] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 
initialization... 00:24:33.442 [2024-12-09 10:35:05.752576] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604664 ] 00:24:33.442 [2024-12-09 10:35:05.799979] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:33.442 [2024-12-09 10:35:05.800033] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:33.442 [2024-12-09 10:35:05.800043] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:33.442 [2024-12-09 10:35:05.800062] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:33.442 [2024-12-09 10:35:05.800074] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:33.442 [2024-12-09 10:35:05.803438] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:33.442 [2024-12-09 10:35:05.803479] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x129b690 0 00:24:33.442 [2024-12-09 10:35:05.811150] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:33.442 [2024-12-09 10:35:05.811171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:33.442 [2024-12-09 10:35:05.811184] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:33.442 [2024-12-09 10:35:05.811191] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:33.442 [2024-12-09 10:35:05.811225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.811237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.811244] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.442 [2024-12-09 10:35:05.811257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:33.442 [2024-12-09 10:35:05.811285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.442 [2024-12-09 10:35:05.819156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.442 [2024-12-09 10:35:05.819172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.442 [2024-12-09 10:35:05.819180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.442 [2024-12-09 10:35:05.819200] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:33.442 [2024-12-09 10:35:05.819225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:33.442 [2024-12-09 10:35:05.819235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:33.442 [2024-12-09 10:35:05.819254] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819270] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.442 [2024-12-09 10:35:05.819281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.442 [2024-12-09 10:35:05.819311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.442 [2024-12-09 
10:35:05.819427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.442 [2024-12-09 10:35:05.819441] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.442 [2024-12-09 10:35:05.819448] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819455] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.442 [2024-12-09 10:35:05.819467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:33.442 [2024-12-09 10:35:05.819482] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:33.442 [2024-12-09 10:35:05.819495] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.442 [2024-12-09 10:35:05.819519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.442 [2024-12-09 10:35:05.819540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.442 [2024-12-09 10:35:05.819619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.442 [2024-12-09 10:35:05.819631] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.442 [2024-12-09 10:35:05.819638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.442 [2024-12-09 10:35:05.819654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:33.442 [2024-12-09 10:35:05.819667] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:33.442 [2024-12-09 10:35:05.819679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.442 [2024-12-09 10:35:05.819693] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.442 [2024-12-09 10:35:05.819703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.443 [2024-12-09 10:35:05.819724] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.819803] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.443 [2024-12-09 10:35:05.819816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.443 [2024-12-09 10:35:05.819823] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.819830] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.443 [2024-12-09 10:35:05.819838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:33.443 [2024-12-09 10:35:05.819855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.819864] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.819871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.819881] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.443 [2024-12-09 10:35:05.819902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.819977] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.443 [2024-12-09 10:35:05.819995] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.443 [2024-12-09 10:35:05.820002] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.443 [2024-12-09 10:35:05.820016] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:33.443 [2024-12-09 10:35:05.820025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:33.443 [2024-12-09 10:35:05.820038] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:33.443 [2024-12-09 10:35:05.820158] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:33.443 [2024-12-09 10:35:05.820172] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:33.443 [2024-12-09 10:35:05.820184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.820209] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.443 [2024-12-09 10:35:05.820232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.820343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.443 [2024-12-09 10:35:05.820355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.443 [2024-12-09 10:35:05.820362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.443 [2024-12-09 10:35:05.820376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:33.443 [2024-12-09 10:35:05.820393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820402] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.820418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.443 [2024-12-09 10:35:05.820439] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.820515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.443 [2024-12-09 10:35:05.820526] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.443 [2024-12-09 10:35:05.820533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.443 [2024-12-09 10:35:05.820547] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:33.443 [2024-12-09 10:35:05.820555] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:33.443 [2024-12-09 10:35:05.820569] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:33.443 [2024-12-09 10:35:05.820583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:33.443 [2024-12-09 10:35:05.820597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.820622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.443 [2024-12-09 10:35:05.820643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.820767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.443 [2024-12-09 10:35:05.820782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.443 [2024-12-09 10:35:05.820789] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820795] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=4096, cccid=0 00:24:33.443 [2024-12-09 10:35:05.820803] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fd100) on tqpair(0x129b690): expected_datao=0, 
payload_size=4096 00:24:33.443 [2024-12-09 10:35:05.820810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820828] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.820837] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.443 [2024-12-09 10:35:05.861262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.443 [2024-12-09 10:35:05.861270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861277] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.443 [2024-12-09 10:35:05.861294] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:33.443 [2024-12-09 10:35:05.861304] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:33.443 [2024-12-09 10:35:05.861311] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:33.443 [2024-12-09 10:35:05.861318] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:33.443 [2024-12-09 10:35:05.861325] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:33.443 [2024-12-09 10:35:05.861333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:33.443 [2024-12-09 10:35:05.861347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:33.443 [2024-12-09 10:35:05.861359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861374] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.861386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:33.443 [2024-12-09 10:35:05.861410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.861492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.443 [2024-12-09 10:35:05.861504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.443 [2024-12-09 10:35:05.861511] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.443 [2024-12-09 10:35:05.861527] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.861555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.443 [2024-12-09 10:35:05.861566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.861588] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.443 [2024-12-09 10:35:05.861597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.861618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.443 [2024-12-09 10:35:05.861628] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861634] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861640] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.861649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.443 [2024-12-09 10:35:05.861657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:33.443 [2024-12-09 10:35:05.861676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:33.443 [2024-12-09 10:35:05.861704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.443 [2024-12-09 10:35:05.861711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.443 [2024-12-09 10:35:05.861722] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.443 
[2024-12-09 10:35:05.861744] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd100, cid 0, qid 0 00:24:33.443 [2024-12-09 10:35:05.861770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd280, cid 1, qid 0 00:24:33.444 [2024-12-09 10:35:05.861779] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd400, cid 2, qid 0 00:24:33.444 [2024-12-09 10:35:05.861786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd580, cid 3, qid 0 00:24:33.444 [2024-12-09 10:35:05.861794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.444 [2024-12-09 10:35:05.861918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.444 [2024-12-09 10:35:05.861931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.444 [2024-12-09 10:35:05.861937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.861944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.444 [2024-12-09 10:35:05.861952] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:33.444 [2024-12-09 10:35:05.861960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:33.444 [2024-12-09 10:35:05.861979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:33.444 [2024-12-09 10:35:05.861990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:33.444 [2024-12-09 10:35:05.862000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862011] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862018] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.444 [2024-12-09 10:35:05.862029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:33.444 [2024-12-09 10:35:05.862051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.444 [2024-12-09 10:35:05.862181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.444 [2024-12-09 10:35:05.862195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.444 [2024-12-09 10:35:05.862202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.444 [2024-12-09 10:35:05.862277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:33.444 [2024-12-09 10:35:05.862298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:33.444 [2024-12-09 10:35:05.862312] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.444 [2024-12-09 10:35:05.862331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.444 [2024-12-09 10:35:05.862352] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.444 [2024-12-09 10:35:05.862461] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.444 [2024-12-09 10:35:05.862473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.444 [2024-12-09 10:35:05.862480] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862486] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=4096, cccid=4 00:24:33.444 [2024-12-09 10:35:05.862494] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fd700) on tqpair(0x129b690): expected_datao=0, payload_size=4096 00:24:33.444 [2024-12-09 10:35:05.862501] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862518] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.444 [2024-12-09 10:35:05.862528] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.903250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.703 [2024-12-09 10:35:05.903270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.703 [2024-12-09 10:35:05.903277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.903284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.703 [2024-12-09 10:35:05.903299] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:33.703 [2024-12-09 10:35:05.903323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.903342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.903356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:33.703 [2024-12-09 10:35:05.903364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.703 [2024-12-09 10:35:05.903375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.703 [2024-12-09 10:35:05.903398] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.703 [2024-12-09 10:35:05.903520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.703 [2024-12-09 10:35:05.903534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.703 [2024-12-09 10:35:05.903541] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.903547] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=4096, cccid=4 00:24:33.703 [2024-12-09 10:35:05.903555] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fd700) on tqpair(0x129b690): expected_datao=0, payload_size=4096 00:24:33.703 [2024-12-09 10:35:05.903562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.903573] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.903580] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.944243] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.703 [2024-12-09 10:35:05.944263] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.703 [2024-12-09 10:35:05.944271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.944278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.703 [2024-12-09 10:35:05.944299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.944319] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.944334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.944342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.703 [2024-12-09 10:35:05.944354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.703 [2024-12-09 10:35:05.944377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.703 [2024-12-09 10:35:05.944475] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.703 [2024-12-09 10:35:05.944487] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.703 [2024-12-09 10:35:05.944494] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.944501] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=4096, cccid=4 00:24:33.703 [2024-12-09 10:35:05.944508] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fd700) on tqpair(0x129b690): expected_datao=0, payload_size=4096 00:24:33.703 [2024-12-09 10:35:05.944515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.944532] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.944541] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.985245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.703 [2024-12-09 10:35:05.985263] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.703 [2024-12-09 10:35:05.985271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.985278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.703 [2024-12-09 10:35:05.985291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985359] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985367] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:33.703 [2024-12-09 10:35:05.985375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:33.703 [2024-12-09 10:35:05.985384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:33.703 [2024-12-09 10:35:05.985402] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.985411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.703 [2024-12-09 10:35:05.985422] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.703 [2024-12-09 10:35:05.985433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.703 [2024-12-09 10:35:05.985441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.985456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:33.704 [2024-12-09 10:35:05.985482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.704 [2024-12-09 10:35:05.985495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd880, cid 5, qid 0 00:24:33.704 [2024-12-09 10:35:05.985584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.985596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.985603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.985619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.985629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.985635] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985642] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd880) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.985657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985665] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.985676] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.985697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd880, cid 5, qid 0 00:24:33.704 [2024-12-09 10:35:05.985776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.985788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.985795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985801] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd880) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.985816] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.985835] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.985859] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd880, cid 5, qid 0 00:24:33.704 [2024-12-09 10:35:05.985931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.985943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.985949] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd880) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.985971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.985980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.985990] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.986010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd880, cid 5, qid 0 00:24:33.704 [2024-12-09 10:35:05.986087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.986101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.986108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd880) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.986150] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986163] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.986174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.986187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.986204] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.986215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.986232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.986244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x129b690) 00:24:33.704 [2024-12-09 10:35:05.986260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.704 [2024-12-09 10:35:05.986283] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd880, cid 5, qid 0 00:24:33.704 [2024-12-09 10:35:05.986294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd700, cid 4, qid 0 00:24:33.704 [2024-12-09 10:35:05.986302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fda00, cid 6, qid 0 00:24:33.704 [2024-12-09 10:35:05.986309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fdb80, cid 7, qid 0 00:24:33.704 [2024-12-09 10:35:05.986515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.704 [2024-12-09 10:35:05.986530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.704 [2024-12-09 10:35:05.986537] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986543] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=8192, cccid=5 00:24:33.704 [2024-12-09 10:35:05.986551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fd880) on tqpair(0x129b690): expected_datao=0, payload_size=8192 00:24:33.704 [2024-12-09 10:35:05.986562] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986573] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986580] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.704 [2024-12-09 10:35:05.986598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.704 [2024-12-09 10:35:05.986604] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986611] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=512, cccid=4 00:24:33.704 [2024-12-09 10:35:05.986618] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fd700) on tqpair(0x129b690): expected_datao=0, payload_size=512 00:24:33.704 [2024-12-09 10:35:05.986625] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986634] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.704 [2024-12-09 10:35:05.986658] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.704 [2024-12-09 10:35:05.986664] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986670] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x129b690): datao=0, datal=512, cccid=6 00:24:33.704 [2024-12-09 10:35:05.986677] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fda00) on tqpair(0x129b690): expected_datao=0, payload_size=512 00:24:33.704 [2024-12-09 10:35:05.986685] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986694] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986700] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:33.704 [2024-12-09 10:35:05.986717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:33.704 [2024-12-09 10:35:05.986724] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986730] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x129b690): datao=0, datal=4096, cccid=7 00:24:33.704 [2024-12-09 10:35:05.986737] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12fdb80) on tqpair(0x129b690): expected_datao=0, payload_size=4096 00:24:33.704 [2024-12-09 10:35:05.986744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986753] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986760] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986772] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.986781] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.986788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986809] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd880) on tqpair=0x129b690 00:24:33.704 [2024-12-09 
10:35:05.986832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.986843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.986850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd700) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.986886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.986896] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.986902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fda00) on tqpair=0x129b690 00:24:33.704 [2024-12-09 10:35:05.986921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.704 [2024-12-09 10:35:05.986931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.704 [2024-12-09 10:35:05.986937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.704 [2024-12-09 10:35:05.986943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fdb80) on tqpair=0x129b690 00:24:33.704 ===================================================== 00:24:33.704 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:33.704 ===================================================== 00:24:33.704 Controller Capabilities/Features 00:24:33.704 ================================ 00:24:33.704 Vendor ID: 8086 00:24:33.704 Subsystem Vendor ID: 8086 00:24:33.704 Serial Number: SPDK00000000000001 00:24:33.704 Model Number: SPDK bdev Controller 00:24:33.705 Firmware Version: 25.01 00:24:33.705 Recommended Arb Burst: 6 00:24:33.705 IEEE OUI Identifier: e4 d2 5c 00:24:33.705 Multi-path I/O 00:24:33.705 May have 
multiple subsystem ports: Yes 00:24:33.705 May have multiple controllers: Yes 00:24:33.705 Associated with SR-IOV VF: No 00:24:33.705 Max Data Transfer Size: 131072 00:24:33.705 Max Number of Namespaces: 32 00:24:33.705 Max Number of I/O Queues: 127 00:24:33.705 NVMe Specification Version (VS): 1.3 00:24:33.705 NVMe Specification Version (Identify): 1.3 00:24:33.705 Maximum Queue Entries: 128 00:24:33.705 Contiguous Queues Required: Yes 00:24:33.705 Arbitration Mechanisms Supported 00:24:33.705 Weighted Round Robin: Not Supported 00:24:33.705 Vendor Specific: Not Supported 00:24:33.705 Reset Timeout: 15000 ms 00:24:33.705 Doorbell Stride: 4 bytes 00:24:33.705 NVM Subsystem Reset: Not Supported 00:24:33.705 Command Sets Supported 00:24:33.705 NVM Command Set: Supported 00:24:33.705 Boot Partition: Not Supported 00:24:33.705 Memory Page Size Minimum: 4096 bytes 00:24:33.705 Memory Page Size Maximum: 4096 bytes 00:24:33.705 Persistent Memory Region: Not Supported 00:24:33.705 Optional Asynchronous Events Supported 00:24:33.705 Namespace Attribute Notices: Supported 00:24:33.705 Firmware Activation Notices: Not Supported 00:24:33.705 ANA Change Notices: Not Supported 00:24:33.705 PLE Aggregate Log Change Notices: Not Supported 00:24:33.705 LBA Status Info Alert Notices: Not Supported 00:24:33.705 EGE Aggregate Log Change Notices: Not Supported 00:24:33.705 Normal NVM Subsystem Shutdown event: Not Supported 00:24:33.705 Zone Descriptor Change Notices: Not Supported 00:24:33.705 Discovery Log Change Notices: Not Supported 00:24:33.705 Controller Attributes 00:24:33.705 128-bit Host Identifier: Supported 00:24:33.705 Non-Operational Permissive Mode: Not Supported 00:24:33.705 NVM Sets: Not Supported 00:24:33.705 Read Recovery Levels: Not Supported 00:24:33.705 Endurance Groups: Not Supported 00:24:33.705 Predictable Latency Mode: Not Supported 00:24:33.705 Traffic Based Keep ALive: Not Supported 00:24:33.705 Namespace Granularity: Not Supported 00:24:33.705 SQ 
Associations: Not Supported 00:24:33.705 UUID List: Not Supported 00:24:33.705 Multi-Domain Subsystem: Not Supported 00:24:33.705 Fixed Capacity Management: Not Supported 00:24:33.705 Variable Capacity Management: Not Supported 00:24:33.705 Delete Endurance Group: Not Supported 00:24:33.705 Delete NVM Set: Not Supported 00:24:33.705 Extended LBA Formats Supported: Not Supported 00:24:33.705 Flexible Data Placement Supported: Not Supported 00:24:33.705 00:24:33.705 Controller Memory Buffer Support 00:24:33.705 ================================ 00:24:33.705 Supported: No 00:24:33.705 00:24:33.705 Persistent Memory Region Support 00:24:33.705 ================================ 00:24:33.705 Supported: No 00:24:33.705 00:24:33.705 Admin Command Set Attributes 00:24:33.705 ============================ 00:24:33.705 Security Send/Receive: Not Supported 00:24:33.705 Format NVM: Not Supported 00:24:33.705 Firmware Activate/Download: Not Supported 00:24:33.705 Namespace Management: Not Supported 00:24:33.705 Device Self-Test: Not Supported 00:24:33.705 Directives: Not Supported 00:24:33.705 NVMe-MI: Not Supported 00:24:33.705 Virtualization Management: Not Supported 00:24:33.705 Doorbell Buffer Config: Not Supported 00:24:33.705 Get LBA Status Capability: Not Supported 00:24:33.705 Command & Feature Lockdown Capability: Not Supported 00:24:33.705 Abort Command Limit: 4 00:24:33.705 Async Event Request Limit: 4 00:24:33.705 Number of Firmware Slots: N/A 00:24:33.705 Firmware Slot 1 Read-Only: N/A 00:24:33.705 Firmware Activation Without Reset: N/A 00:24:33.705 Multiple Update Detection Support: N/A 00:24:33.705 Firmware Update Granularity: No Information Provided 00:24:33.705 Per-Namespace SMART Log: No 00:24:33.705 Asymmetric Namespace Access Log Page: Not Supported 00:24:33.705 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:33.705 Command Effects Log Page: Supported 00:24:33.705 Get Log Page Extended Data: Supported 00:24:33.705 Telemetry Log Pages: Not Supported 00:24:33.705 
Persistent Event Log Pages: Not Supported 00:24:33.705 Supported Log Pages Log Page: May Support 00:24:33.705 Commands Supported & Effects Log Page: Not Supported 00:24:33.705 Feature Identifiers & Effects Log Page:May Support 00:24:33.705 NVMe-MI Commands & Effects Log Page: May Support 00:24:33.705 Data Area 4 for Telemetry Log: Not Supported 00:24:33.705 Error Log Page Entries Supported: 128 00:24:33.705 Keep Alive: Supported 00:24:33.705 Keep Alive Granularity: 10000 ms 00:24:33.705 00:24:33.705 NVM Command Set Attributes 00:24:33.705 ========================== 00:24:33.705 Submission Queue Entry Size 00:24:33.705 Max: 64 00:24:33.705 Min: 64 00:24:33.705 Completion Queue Entry Size 00:24:33.705 Max: 16 00:24:33.705 Min: 16 00:24:33.705 Number of Namespaces: 32 00:24:33.705 Compare Command: Supported 00:24:33.705 Write Uncorrectable Command: Not Supported 00:24:33.705 Dataset Management Command: Supported 00:24:33.705 Write Zeroes Command: Supported 00:24:33.705 Set Features Save Field: Not Supported 00:24:33.705 Reservations: Supported 00:24:33.705 Timestamp: Not Supported 00:24:33.705 Copy: Supported 00:24:33.705 Volatile Write Cache: Present 00:24:33.705 Atomic Write Unit (Normal): 1 00:24:33.705 Atomic Write Unit (PFail): 1 00:24:33.705 Atomic Compare & Write Unit: 1 00:24:33.705 Fused Compare & Write: Supported 00:24:33.705 Scatter-Gather List 00:24:33.705 SGL Command Set: Supported 00:24:33.705 SGL Keyed: Supported 00:24:33.705 SGL Bit Bucket Descriptor: Not Supported 00:24:33.705 SGL Metadata Pointer: Not Supported 00:24:33.705 Oversized SGL: Not Supported 00:24:33.705 SGL Metadata Address: Not Supported 00:24:33.705 SGL Offset: Supported 00:24:33.705 Transport SGL Data Block: Not Supported 00:24:33.705 Replay Protected Memory Block: Not Supported 00:24:33.705 00:24:33.705 Firmware Slot Information 00:24:33.705 ========================= 00:24:33.705 Active slot: 1 00:24:33.705 Slot 1 Firmware Revision: 25.01 00:24:33.705 00:24:33.705 00:24:33.705 
Commands Supported and Effects 00:24:33.705 ============================== 00:24:33.705 Admin Commands 00:24:33.705 -------------- 00:24:33.705 Get Log Page (02h): Supported 00:24:33.705 Identify (06h): Supported 00:24:33.705 Abort (08h): Supported 00:24:33.705 Set Features (09h): Supported 00:24:33.705 Get Features (0Ah): Supported 00:24:33.705 Asynchronous Event Request (0Ch): Supported 00:24:33.705 Keep Alive (18h): Supported 00:24:33.705 I/O Commands 00:24:33.705 ------------ 00:24:33.705 Flush (00h): Supported LBA-Change 00:24:33.705 Write (01h): Supported LBA-Change 00:24:33.705 Read (02h): Supported 00:24:33.705 Compare (05h): Supported 00:24:33.705 Write Zeroes (08h): Supported LBA-Change 00:24:33.705 Dataset Management (09h): Supported LBA-Change 00:24:33.705 Copy (19h): Supported LBA-Change 00:24:33.705 00:24:33.705 Error Log 00:24:33.705 ========= 00:24:33.705 00:24:33.705 Arbitration 00:24:33.705 =========== 00:24:33.705 Arbitration Burst: 1 00:24:33.705 00:24:33.705 Power Management 00:24:33.705 ================ 00:24:33.705 Number of Power States: 1 00:24:33.705 Current Power State: Power State #0 00:24:33.705 Power State #0: 00:24:33.705 Max Power: 0.00 W 00:24:33.705 Non-Operational State: Operational 00:24:33.705 Entry Latency: Not Reported 00:24:33.705 Exit Latency: Not Reported 00:24:33.705 Relative Read Throughput: 0 00:24:33.705 Relative Read Latency: 0 00:24:33.705 Relative Write Throughput: 0 00:24:33.705 Relative Write Latency: 0 00:24:33.705 Idle Power: Not Reported 00:24:33.705 Active Power: Not Reported 00:24:33.705 Non-Operational Permissive Mode: Not Supported 00:24:33.705 00:24:33.705 Health Information 00:24:33.705 ================== 00:24:33.705 Critical Warnings: 00:24:33.705 Available Spare Space: OK 00:24:33.705 Temperature: OK 00:24:33.705 Device Reliability: OK 00:24:33.705 Read Only: No 00:24:33.705 Volatile Memory Backup: OK 00:24:33.705 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:33.705 Temperature Threshold: 0 Kelvin 
(-273 Celsius) 00:24:33.705 Available Spare: 0% 00:24:33.705 Available Spare Threshold: 0% 00:24:33.705 Life Percentage Used:[2024-12-09 10:35:05.987063] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.705 [2024-12-09 10:35:05.987075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x129b690) 00:24:33.706 [2024-12-09 10:35:05.987085] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.706 [2024-12-09 10:35:05.987107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fdb80, cid 7, qid 0 00:24:33.706 [2024-12-09 10:35:05.987248] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.706 [2024-12-09 10:35:05.987262] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.706 [2024-12-09 10:35:05.987269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987276] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fdb80) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987322] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:33.706 [2024-12-09 10:35:05.987342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd100) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.706 [2024-12-09 10:35:05.987361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd280) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.706 [2024-12-09 10:35:05.987376] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x12fd400) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.706 [2024-12-09 10:35:05.987392] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd580) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:33.706 [2024-12-09 10:35:05.987411] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987418] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987425] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129b690) 00:24:33.706 [2024-12-09 10:35:05.987435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.706 [2024-12-09 10:35:05.987472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd580, cid 3, qid 0 00:24:33.706 [2024-12-09 10:35:05.987612] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.706 [2024-12-09 10:35:05.987625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.706 [2024-12-09 10:35:05.987632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd580) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987656] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987663] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129b690) 
00:24:33.706 [2024-12-09 10:35:05.987673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.706 [2024-12-09 10:35:05.987703] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd580, cid 3, qid 0 00:24:33.706 [2024-12-09 10:35:05.987794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.706 [2024-12-09 10:35:05.987808] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.706 [2024-12-09 10:35:05.987815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987821] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd580) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.987829] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:33.706 [2024-12-09 10:35:05.987836] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:33.706 [2024-12-09 10:35:05.987852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987867] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129b690) 00:24:33.706 [2024-12-09 10:35:05.987878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.706 [2024-12-09 10:35:05.987898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd580, cid 3, qid 0 00:24:33.706 [2024-12-09 10:35:05.987973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.706 [2024-12-09 10:35:05.987985] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.706 [2024-12-09 10:35:05.987992] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.987998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd580) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.988014] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.988023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.988030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129b690) 00:24:33.706 [2024-12-09 10:35:05.988040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.706 [2024-12-09 10:35:05.988059] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd580, cid 3, qid 0 00:24:33.706 [2024-12-09 10:35:05.988136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.706 [2024-12-09 10:35:05.992161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.706 [2024-12-09 10:35:05.992170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.992177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd580) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.992209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.992219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.992225] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x129b690) 00:24:33.706 [2024-12-09 10:35:05.992236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:33.706 [2024-12-09 10:35:05.992258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12fd580, cid 3, qid 0 00:24:33.706 [2024-12-09 
10:35:05.992375] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:33.706 [2024-12-09 10:35:05.992388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:33.706 [2024-12-09 10:35:05.992394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:33.706 [2024-12-09 10:35:05.992401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12fd580) on tqpair=0x129b690 00:24:33.706 [2024-12-09 10:35:05.992414] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:24:33.706 0% 00:24:33.706 Data Units Read: 0 00:24:33.706 Data Units Written: 0 00:24:33.706 Host Read Commands: 0 00:24:33.706 Host Write Commands: 0 00:24:33.706 Controller Busy Time: 0 minutes 00:24:33.706 Power Cycles: 0 00:24:33.706 Power On Hours: 0 hours 00:24:33.706 Unsafe Shutdowns: 0 00:24:33.706 Unrecoverable Media Errors: 0 00:24:33.706 Lifetime Error Log Entries: 0 00:24:33.706 Warning Temperature Time: 0 minutes 00:24:33.706 Critical Temperature Time: 0 minutes 00:24:33.706 00:24:33.706 Number of Queues 00:24:33.706 ================ 00:24:33.706 Number of I/O Submission Queues: 127 00:24:33.706 Number of I/O Completion Queues: 127 00:24:33.706 00:24:33.706 Active Namespaces 00:24:33.706 ================= 00:24:33.706 Namespace ID:1 00:24:33.706 Error Recovery Timeout: Unlimited 00:24:33.706 Command Set Identifier: NVM (00h) 00:24:33.706 Deallocate: Supported 00:24:33.706 Deallocated/Unwritten Error: Not Supported 00:24:33.706 Deallocated Read Value: Unknown 00:24:33.706 Deallocate in Write Zeroes: Not Supported 00:24:33.706 Deallocated Guard Field: 0xFFFF 00:24:33.706 Flush: Supported 00:24:33.706 Reservation: Supported 00:24:33.706 Namespace Sharing Capabilities: Multiple Controllers 00:24:33.706 Size (in LBAs): 131072 (0GiB) 00:24:33.706 Capacity (in LBAs): 131072 (0GiB) 00:24:33.706 Utilization (in LBAs): 131072 (0GiB) 00:24:33.706 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:24:33.706 EUI64: ABCDEF0123456789 00:24:33.706 UUID: 649803ea-2031-4e78-bca5-5b6f47797143 00:24:33.706 Thin Provisioning: Not Supported 00:24:33.706 Per-NS Atomic Units: Yes 00:24:33.706 Atomic Boundary Size (Normal): 0 00:24:33.706 Atomic Boundary Size (PFail): 0 00:24:33.706 Atomic Boundary Offset: 0 00:24:33.706 Maximum Single Source Range Length: 65535 00:24:33.706 Maximum Copy Length: 65535 00:24:33.706 Maximum Source Range Count: 1 00:24:33.706 NGUID/EUI64 Never Reused: No 00:24:33.706 Namespace Write Protected: No 00:24:33.706 Number of LBA Formats: 1 00:24:33.706 Current LBA Format: LBA Format #00 00:24:33.706 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:33.706 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:33.706 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:33.706 rmmod nvme_tcp 00:24:33.706 rmmod nvme_fabrics 00:24:33.706 rmmod nvme_keyring 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2604627 ']' 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2604627 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2604627 ']' 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2604627 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2604627 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2604627' 00:24:33.964 killing process with pid 2604627 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2604627 00:24:33.964 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2604627 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- 
# [[ tcp == \t\c\p ]] 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.223 10:35:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:36.125 00:24:36.125 real 0m5.951s 00:24:36.125 user 0m6.006s 00:24:36.125 sys 0m1.985s 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:36.125 ************************************ 00:24:36.125 END TEST nvmf_identify 00:24:36.125 ************************************ 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.125 ************************************ 00:24:36.125 START TEST nvmf_perf 00:24:36.125 ************************************ 00:24:36.125 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:36.386 * Looking for test storage... 00:24:36.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:24:36.386 --rc genhtml_branch_coverage=1 00:24:36.386 --rc genhtml_function_coverage=1 00:24:36.386 --rc genhtml_legend=1 00:24:36.386 --rc geninfo_all_blocks=1 00:24:36.386 --rc geninfo_unexecuted_blocks=1 00:24:36.386 00:24:36.386 ' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.386 --rc genhtml_branch_coverage=1 00:24:36.386 --rc genhtml_function_coverage=1 00:24:36.386 --rc genhtml_legend=1 00:24:36.386 --rc geninfo_all_blocks=1 00:24:36.386 --rc geninfo_unexecuted_blocks=1 00:24:36.386 00:24:36.386 ' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.386 --rc genhtml_branch_coverage=1 00:24:36.386 --rc genhtml_function_coverage=1 00:24:36.386 --rc genhtml_legend=1 00:24:36.386 --rc geninfo_all_blocks=1 00:24:36.386 --rc geninfo_unexecuted_blocks=1 00:24:36.386 00:24:36.386 ' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:36.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.386 --rc genhtml_branch_coverage=1 00:24:36.386 --rc genhtml_function_coverage=1 00:24:36.386 --rc genhtml_legend=1 00:24:36.386 --rc geninfo_all_blocks=1 00:24:36.386 --rc geninfo_unexecuted_blocks=1 00:24:36.386 00:24:36.386 ' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.386 10:35:08 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.386 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.387 10:35:08 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:24:36.387 10:35:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:38.996 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.996 
10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:38.996 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:38.996 Found net devices under 0000:09:00.0: cvl_0_0 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.996 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:38.997 Found net devices under 0000:09:00.1: cvl_0_1 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.997 10:35:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:38.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:24:38.997 00:24:38.997 --- 10.0.0.2 ping statistics --- 00:24:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.997 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:38.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:24:38.997 00:24:38.997 --- 10.0.0.1 ping statistics --- 00:24:38.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.997 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2606729 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2606729 00:24:38.997 
10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2606729 ']' 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.997 [2024-12-09 10:35:11.092245] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:24:38.997 [2024-12-09 10:35:11.092329] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.997 [2024-12-09 10:35:11.161553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:38.997 [2024-12-09 10:35:11.216012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.997 [2024-12-09 10:35:11.216071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.997 [2024-12-09 10:35:11.216109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.997 [2024-12-09 10:35:11.216119] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.997 [2024-12-09 10:35:11.216128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:38.997 [2024-12-09 10:35:11.217756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.997 [2024-12-09 10:35:11.217862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.997 [2024-12-09 10:35:11.217938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:38.997 [2024-12-09 10:35:11.217941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:38.997 10:35:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:42.277 10:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:42.277 10:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:42.535 10:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:24:42.535 10:35:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:42.793 10:35:15 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:42.793 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:24:42.793 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:42.793 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:42.793 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:43.051 [2024-12-09 10:35:15.371736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.051 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.308 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:43.308 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.565 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:43.565 10:35:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:43.823 10:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.080 [2024-12-09 10:35:16.463812] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.080 10:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:24:44.337 10:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:24:44.337 10:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:24:44.337 10:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:44.337 10:35:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:24:45.707 Initializing NVMe Controllers 00:24:45.707 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:24:45.707 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:24:45.707 Initialization complete. Launching workers. 00:24:45.707 ======================================================== 00:24:45.707 Latency(us) 00:24:45.707 Device Information : IOPS MiB/s Average min max 00:24:45.707 PCIE (0000:0b:00.0) NSID 1 from core 0: 83597.75 326.55 381.99 33.37 4570.70 00:24:45.707 ======================================================== 00:24:45.707 Total : 83597.75 326.55 381.99 33.37 4570.70 00:24:45.707 00:24:45.707 10:35:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:47.075 Initializing NVMe Controllers 00:24:47.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:47.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:47.076 Initialization complete. Launching workers. 
00:24:47.076 ======================================================== 00:24:47.076 Latency(us) 00:24:47.076 Device Information : IOPS MiB/s Average min max 00:24:47.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 123.57 0.48 8302.98 134.76 44787.99 00:24:47.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 44.84 0.18 23008.93 7480.40 47934.74 00:24:47.076 ======================================================== 00:24:47.076 Total : 168.41 0.66 12218.76 134.76 47934.74 00:24:47.076 00:24:47.332 10:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:48.704 Initializing NVMe Controllers 00:24:48.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:48.704 Initialization complete. Launching workers. 
00:24:48.704 ======================================================== 00:24:48.704 Latency(us) 00:24:48.704 Device Information : IOPS MiB/s Average min max 00:24:48.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8361.31 32.66 3827.63 708.96 7682.34 00:24:48.704 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3916.69 15.30 8217.81 5571.74 16325.70 00:24:48.704 ======================================================== 00:24:48.704 Total : 12277.99 47.96 5228.10 708.96 16325.70 00:24:48.704 00:24:48.704 10:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:48.704 10:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:48.704 10:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:51.232 Initializing NVMe Controllers 00:24:51.232 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.232 Controller IO queue size 128, less than required. 00:24:51.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.232 Controller IO queue size 128, less than required. 00:24:51.232 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:51.232 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:51.232 Initialization complete. Launching workers. 
00:24:51.232 ======================================================== 00:24:51.232 Latency(us) 00:24:51.232 Device Information : IOPS MiB/s Average min max 00:24:51.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1703.95 425.99 76387.50 47094.92 117256.67 00:24:51.232 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 611.48 152.87 216407.32 112454.05 320598.71 00:24:51.232 ======================================================== 00:24:51.232 Total : 2315.44 578.86 113365.31 47094.92 320598.71 00:24:51.232 00:24:51.490 10:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:51.490 No valid NVMe controllers or AIO or URING devices found 00:24:51.490 Initializing NVMe Controllers 00:24:51.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.490 Controller IO queue size 128, less than required. 00:24:51.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.490 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:51.490 Controller IO queue size 128, less than required. 00:24:51.490 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:51.490 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:51.490 WARNING: Some requested NVMe devices were skipped 00:24:51.748 10:35:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:54.294 Initializing NVMe Controllers 00:24:54.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:54.294 Controller IO queue size 128, less than required. 00:24:54.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:54.294 Controller IO queue size 128, less than required. 00:24:54.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:54.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:54.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:54.294 Initialization complete. Launching workers. 
00:24:54.294 00:24:54.294 ==================== 00:24:54.294 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:54.294 TCP transport: 00:24:54.294 polls: 11295 00:24:54.294 idle_polls: 7924 00:24:54.294 sock_completions: 3371 00:24:54.294 nvme_completions: 6221 00:24:54.294 submitted_requests: 9368 00:24:54.294 queued_requests: 1 00:24:54.294 00:24:54.294 ==================== 00:24:54.294 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:54.294 TCP transport: 00:24:54.294 polls: 14536 00:24:54.294 idle_polls: 11343 00:24:54.294 sock_completions: 3193 00:24:54.294 nvme_completions: 5727 00:24:54.294 submitted_requests: 8656 00:24:54.294 queued_requests: 1 00:24:54.294 ======================================================== 00:24:54.294 Latency(us) 00:24:54.294 Device Information : IOPS MiB/s Average min max 00:24:54.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1554.26 388.57 84416.16 55491.03 159091.60 00:24:54.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1430.82 357.70 91215.01 39596.05 150262.13 00:24:54.294 ======================================================== 00:24:54.294 Total : 2985.08 746.27 87675.01 39596.05 159091.60 00:24:54.294 00:24:54.551 10:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:54.551 10:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:54.808 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:54.808 rmmod nvme_tcp 00:24:54.808 rmmod nvme_fabrics 00:24:54.809 rmmod nvme_keyring 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2606729 ']' 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2606729 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2606729 ']' 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2606729 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2606729 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2606729' 00:24:54.809 killing process with pid 2606729 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2606729 00:24:54.809 10:35:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2606729 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.702 10:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:58.603 00:24:58.603 real 0m22.298s 00:24:58.603 user 1m8.910s 00:24:58.603 sys 0m5.768s 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.603 ************************************ 00:24:58.603 END TEST nvmf_perf 00:24:58.603 ************************************ 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.603 ************************************ 00:24:58.603 START TEST nvmf_fio_host 00:24:58.603 ************************************ 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:58.603 * Looking for test storage... 00:24:58.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:58.603 10:35:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:58.862 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:58.863 10:35:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:58.863 10:35:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.863 --rc genhtml_branch_coverage=1 00:24:58.863 --rc genhtml_function_coverage=1 00:24:58.863 --rc genhtml_legend=1 00:24:58.863 --rc geninfo_all_blocks=1 00:24:58.863 --rc geninfo_unexecuted_blocks=1 00:24:58.863 00:24:58.863 ' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.863 --rc genhtml_branch_coverage=1 00:24:58.863 --rc genhtml_function_coverage=1 00:24:58.863 --rc genhtml_legend=1 00:24:58.863 --rc geninfo_all_blocks=1 00:24:58.863 --rc geninfo_unexecuted_blocks=1 00:24:58.863 00:24:58.863 ' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.863 --rc genhtml_branch_coverage=1 00:24:58.863 --rc genhtml_function_coverage=1 00:24:58.863 --rc genhtml_legend=1 00:24:58.863 --rc geninfo_all_blocks=1 00:24:58.863 --rc geninfo_unexecuted_blocks=1 00:24:58.863 00:24:58.863 ' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:58.863 --rc genhtml_branch_coverage=1 00:24:58.863 --rc genhtml_function_coverage=1 00:24:58.863 --rc genhtml_legend=1 00:24:58.863 --rc geninfo_all_blocks=1 00:24:58.863 --rc geninfo_unexecuted_blocks=1 00:24:58.863 00:24:58.863 ' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:58.863 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:58.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:58.864 10:35:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:58.864 10:35:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.766 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.766 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:00.766 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:00.766 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:00.766 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x159b)' 00:25:00.767 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:00.767 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.767 10:35:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:00.767 Found net devices under 0000:09:00.0: cvl_0_0 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:00.767 Found net devices under 0000:09:00.1: cvl_0_1 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.767 10:35:33 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.767 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:01.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:01.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:25:01.027 00:25:01.027 --- 10.0.0.2 ping statistics --- 00:25:01.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.027 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:01.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:25:01.027 00:25:01.027 --- 10.0.0.1 ping statistics --- 00:25:01.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.027 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:01.027 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2610712 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2610712 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2610712 ']' 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.028 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.028 [2024-12-09 10:35:33.358241] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:25:01.028 [2024-12-09 10:35:33.358326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.028 [2024-12-09 10:35:33.439066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.306 [2024-12-09 10:35:33.501847] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.306 [2024-12-09 10:35:33.501904] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:01.306 [2024-12-09 10:35:33.501933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.306 [2024-12-09 10:35:33.501944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.306 [2024-12-09 10:35:33.501954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.306 [2024-12-09 10:35:33.503720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.306 [2024-12-09 10:35:33.503778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.306 [2024-12-09 10:35:33.503846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.306 [2024-12-09 10:35:33.503849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.306 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.306 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:01.306 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:01.563 [2024-12-09 10:35:33.894247] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.563 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:01.563 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:01.563 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.563 10:35:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:02.127 Malloc1 00:25:02.128 10:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.385 10:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:02.643 10:35:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.901 [2024-12-09 10:35:35.160113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.901 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:03.159 10:35:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:03.159 10:35:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:03.415 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:03.415 fio-3.35 00:25:03.415 Starting 1 thread 00:25:05.941 00:25:05.941 test: (groupid=0, jobs=1): err= 0: pid=2611185: Mon Dec 9 10:35:38 2024 00:25:05.941 read: IOPS=8762, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2007msec) 00:25:05.941 slat (nsec): min=1943, max=152758, avg=2587.15, stdev=1839.13 00:25:05.941 clat (usec): min=2657, max=14594, avg=7975.98, stdev=671.85 00:25:05.941 lat (usec): min=2682, max=14596, avg=7978.56, stdev=671.76 00:25:05.941 clat percentiles (usec): 00:25:05.941 | 1.00th=[ 6521], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7439], 00:25:05.941 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:25:05.941 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:25:05.941 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[12387], 99.95th=[13829], 00:25:05.941 | 99.99th=[14615] 00:25:05.941 bw ( KiB/s): min=33808, max=35688, per=100.00%, avg=35048.00, stdev=842.83, samples=4 00:25:05.942 iops : min= 8452, max= 8922, avg=8762.00, stdev=210.71, samples=4 00:25:05.942 write: IOPS=8770, BW=34.3MiB/s (35.9MB/s)(68.8MiB/2007msec); 0 zone resets 00:25:05.942 slat (usec): min=2, max=133, avg= 2.71, stdev= 1.41 00:25:05.942 clat (usec): min=1313, max=12432, avg=6565.93, stdev=555.13 00:25:05.942 lat (usec): min=1321, max=12434, avg=6568.65, stdev=555.09 00:25:05.942 clat percentiles (usec): 00:25:05.942 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:25:05.942 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:25:05.942 | 
70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:25:05.942 | 99.00th=[ 7767], 99.50th=[ 7898], 99.90th=[10683], 99.95th=[11469], 00:25:05.942 | 99.99th=[12387] 00:25:05.942 bw ( KiB/s): min=34752, max=35472, per=99.98%, avg=35076.00, stdev=342.01, samples=4 00:25:05.942 iops : min= 8688, max= 8868, avg=8769.00, stdev=85.50, samples=4 00:25:05.942 lat (msec) : 2=0.02%, 4=0.11%, 10=99.68%, 20=0.19% 00:25:05.942 cpu : usr=65.55%, sys=32.75%, ctx=75, majf=0, minf=31 00:25:05.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:05.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:05.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:05.942 issued rwts: total=17586,17603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:05.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:05.942 00:25:05.942 Run status group 0 (all jobs): 00:25:05.942 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2007-2007msec 00:25:05.942 WRITE: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.8MiB (72.1MB), run=2007-2007msec 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:05.942 10:35:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:05.942 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:05.942 fio-3.35 00:25:05.942 Starting 1 thread 00:25:08.468 00:25:08.468 test: (groupid=0, jobs=1): err= 0: pid=2611520: Mon Dec 9 10:35:40 2024 00:25:08.468 read: IOPS=8230, BW=129MiB/s (135MB/s)(258MiB/2008msec) 00:25:08.468 slat (nsec): min=2832, max=92652, avg=3601.61, stdev=1529.14 00:25:08.468 clat (usec): min=2626, max=16052, avg=8778.69, stdev=1917.41 00:25:08.468 lat (usec): min=2630, max=16055, avg=8782.29, stdev=1917.43 00:25:08.468 clat percentiles (usec): 00:25:08.468 | 1.00th=[ 4686], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7177], 00:25:08.468 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9241], 00:25:08.468 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12125], 00:25:08.468 | 99.00th=[13566], 99.50th=[14091], 99.90th=[14877], 99.95th=[15139], 00:25:08.468 | 99.99th=[15926] 00:25:08.468 bw ( KiB/s): min=61120, max=77728, per=52.73%, avg=69442.50, stdev=8043.95, samples=4 00:25:08.468 iops : min= 3820, max= 4858, avg=4340.00, stdev=502.61, samples=4 00:25:08.468 write: IOPS=4959, BW=77.5MiB/s (81.3MB/s)(142MiB/1837msec); 0 zone resets 00:25:08.468 slat (usec): min=30, max=153, avg=33.28, stdev= 4.83 00:25:08.468 clat (usec): min=6256, max=20417, avg=11674.88, stdev=1971.08 00:25:08.468 lat (usec): min=6286, max=20448, avg=11708.16, stdev=1971.18 00:25:08.468 clat percentiles (usec): 00:25:08.468 | 1.00th=[ 7767], 5.00th=[ 8717], 10.00th=[ 9372], 
20.00th=[ 9896], 00:25:08.468 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:25:08.468 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14091], 95.00th=[15139], 00:25:08.468 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19530], 99.95th=[20317], 00:25:08.468 | 99.99th=[20317] 00:25:08.468 bw ( KiB/s): min=62304, max=80224, per=91.00%, avg=72208.50, stdev=8628.26, samples=4 00:25:08.468 iops : min= 3894, max= 5014, avg=4513.00, stdev=539.24, samples=4 00:25:08.468 lat (msec) : 4=0.19%, 10=56.58%, 20=43.20%, 50=0.02% 00:25:08.468 cpu : usr=77.08%, sys=21.72%, ctx=45, majf=0, minf=57 00:25:08.468 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:08.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:08.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:08.468 issued rwts: total=16526,9110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:08.468 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:08.468 00:25:08.468 Run status group 0 (all jobs): 00:25:08.468 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=258MiB (271MB), run=2008-2008msec 00:25:08.468 WRITE: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=142MiB (149MB), run=1837-1837msec 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:08.468 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:08.468 rmmod nvme_tcp 00:25:08.727 rmmod nvme_fabrics 00:25:08.727 rmmod nvme_keyring 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2610712 ']' 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2610712 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2610712 ']' 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2610712 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:08.727 10:35:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.727 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2610712 00:25:08.727 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:08.727 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:08.727 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2610712' 
00:25:08.727 killing process with pid 2610712 00:25:08.727 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2610712 00:25:08.727 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2610712 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:08.987 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:08.988 10:35:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:11.520 00:25:11.520 real 0m12.470s 00:25:11.520 user 0m37.131s 00:25:11.520 sys 0m3.990s 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.520 ************************************ 
00:25:11.520 END TEST nvmf_fio_host 00:25:11.520 ************************************ 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.520 ************************************ 00:25:11.520 START TEST nvmf_failover 00:25:11.520 ************************************ 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:11.520 * Looking for test storage... 00:25:11.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.520 10:35:43 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.520 --rc genhtml_branch_coverage=1 00:25:11.520 --rc genhtml_function_coverage=1 00:25:11.520 --rc genhtml_legend=1 00:25:11.520 --rc geninfo_all_blocks=1 00:25:11.520 --rc geninfo_unexecuted_blocks=1 00:25:11.520 00:25:11.520 ' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.520 --rc genhtml_branch_coverage=1 00:25:11.520 --rc genhtml_function_coverage=1 00:25:11.520 --rc genhtml_legend=1 00:25:11.520 --rc geninfo_all_blocks=1 00:25:11.520 --rc geninfo_unexecuted_blocks=1 00:25:11.520 00:25:11.520 ' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.520 --rc genhtml_branch_coverage=1 00:25:11.520 --rc genhtml_function_coverage=1 00:25:11.520 --rc genhtml_legend=1 00:25:11.520 --rc geninfo_all_blocks=1 00:25:11.520 --rc geninfo_unexecuted_blocks=1 00:25:11.520 00:25:11.520 ' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.520 --rc genhtml_branch_coverage=1 00:25:11.520 --rc genhtml_function_coverage=1 00:25:11.520 --rc genhtml_legend=1 00:25:11.520 --rc 
geninfo_all_blocks=1 00:25:11.520 --rc geninfo_unexecuted_blocks=1 00:25:11.520 00:25:11.520 ' 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.520 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:11.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:11.521 10:35:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.421 10:35:45 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:13.421 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:13.422 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:13.422 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:13.422 Found net devices under 0000:09:00.0: cvl_0_0 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:13.422 Found net devices under 0000:09:00.1: cvl_0_1 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:13.422 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:13.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:13.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:25:13.679 00:25:13.679 --- 10.0.0.2 ping statistics --- 00:25:13.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.679 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:13.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:13.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:25:13.679 00:25:13.679 --- 10.0.0.1 ping statistics --- 00:25:13.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:13.679 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2613834 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 2613834 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2613834 ']' 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:13.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.679 10:35:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.679 [2024-12-09 10:35:45.958557] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:25:13.679 [2024-12-09 10:35:45.958646] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:13.679 [2024-12-09 10:35:46.032762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:13.679 [2024-12-09 10:35:46.087349] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:13.679 [2024-12-09 10:35:46.087419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:13.679 [2024-12-09 10:35:46.087440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:13.679 [2024-12-09 10:35:46.087451] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:13.679 [2024-12-09 10:35:46.087460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:13.679 [2024-12-09 10:35:46.088829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.679 [2024-12-09 10:35:46.088888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.679 [2024-12-09 10:35:46.088892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:13.935 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:14.192 [2024-12-09 10:35:46.478102] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.192 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:14.498 Malloc0 00:25:14.498 10:35:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.788 10:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:15.046 10:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.303 [2024-12-09 10:35:47.690874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.303 10:35:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:15.867 [2024-12-09 10:35:48.015827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:15.867 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:16.124 [2024-12-09 10:35:48.344826] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2614133 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2614133 /var/tmp/bdevperf.sock 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 2614133 ']' 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:16.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:16.124 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.382 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.382 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:16.382 10:35:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:16.639 NVMe0n1 00:25:16.639 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:17.204 00:25:17.204 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2614268 00:25:17.204 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:17.204 10:35:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:25:18.142 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.400 [2024-12-09 10:35:50.661806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.661993] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662016] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662027] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662098] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with 
the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662178] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662201] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662213] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.400 [2024-12-09 10:35:50.662258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 
00:25:18.401 [2024-12-09 10:35:50.662305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662372] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 
10:35:50.662463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662639] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 [2024-12-09 10:35:50.662661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2399e00 is same with the state(6) to be set 00:25:18.401 10:35:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:21.680 10:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:21.680 00:25:21.680 10:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.937 [2024-12-09 10:35:54.366962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367039] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:21.937 [2024-12-09 10:35:54.367120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239a8b0 is same with the state(6) to be set 00:25:22.195 10:35:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:25.526 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:25.526 [2024-12-09 10:35:57.672320] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:25.526 10:35:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:26.459 10:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:26.718 [2024-12-09 10:35:58.949867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.949933] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.949957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.949969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.949980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 
00:25:26.718 [2024-12-09 10:35:58.949991] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950162] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 
10:35:58.950205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950285] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.718 [2024-12-09 10:35:58.950338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950396] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950551] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950627] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950639] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950718] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950742] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950777] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950789] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950824] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950836] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950859] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 [2024-12-09 10:35:58.950882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225fee0 is same with the state(6) to be set 00:25:26.719 10:35:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2614268 00:25:33.302 { 00:25:33.302 "results": [ 00:25:33.302 { 00:25:33.302 "job": "NVMe0n1", 00:25:33.302 "core_mask": "0x1", 00:25:33.302 "workload": "verify", 00:25:33.302 "status": "finished", 00:25:33.302 "verify_range": { 00:25:33.302 "start": 0, 00:25:33.302 "length": 16384 00:25:33.302 }, 00:25:33.302 "queue_depth": 128, 00:25:33.302 "io_size": 4096, 00:25:33.302 "runtime": 15.013086, 00:25:33.302 "iops": 8266.588228429518, 00:25:33.302 "mibps": 32.29136026730281, 00:25:33.302 "io_failed": 11724, 00:25:33.302 "io_timeout": 0, 00:25:33.302 "avg_latency_us": 14119.173233797881, 00:25:33.302 "min_latency_us": 825.2681481481482, 00:25:33.302 "max_latency_us": 16311.182222222222 00:25:33.302 } 00:25:33.302 ], 00:25:33.302 "core_count": 1 00:25:33.302 } 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2614133 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2614133 ']' 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2614133 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2614133 00:25:33.302 10:36:04 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2614133' 00:25:33.302 killing process with pid 2614133 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2614133 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2614133 00:25:33.302 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:33.302 [2024-12-09 10:35:48.412777] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:25:33.302 [2024-12-09 10:35:48.412860] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2614133 ] 00:25:33.302 [2024-12-09 10:35:48.481824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.302 [2024-12-09 10:35:48.541010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.302 Running I/O for 15 seconds... 
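The bdevperf summary JSON above reports both IOPS and MiB/s for the failover run; with a 4 KiB `io_size` the two figures are directly related, and the average latency lets us estimate the effective in-flight depth against the configured `queue_depth` of 128. A quick sanity check over the numbers copied from the log (this script is illustrative only, not part of the SPDK test suite):

```python
# Figures copied verbatim from the "results" JSON in the log above.
iops = 8266.588228429518
io_size = 4096                      # bytes per I/O ("io_size": 4096)
avg_latency_us = 14119.173233797881  # "avg_latency_us"

# Throughput: MiB/s = IOPS * io_size / 2^20.
mibps = iops * io_size / (1024 ** 2)
# This reproduces the reported "mibps": 32.29136026730281.

# Little's law: mean in-flight I/O = IOPS * mean latency. With failover
# interruptions it lands somewhat below the configured queue_depth of 128.
in_flight = iops * (avg_latency_us / 1e6)

print(round(mibps, 2), round(in_flight, 1))
```

The in-flight estimate falling short of 128 is expected here: during the controller failover windows the queue drains, which also accounts for the nonzero `io_failed` count in the summary.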
00:25:33.302 8241.00 IOPS, 32.19 MiB/s [2024-12-09T09:36:05.743Z] [2024-12-09 10:35:50.664535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.302 [2024-12-09 10:35:50.664575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.302 [2024-12-09 10:35:50.664602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.302 [2024-12-09 10:35:50.664617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.302 [2024-12-09 10:35:50.664634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:33.303 [2024-12-09 10:35:50.664733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.664972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.664987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 
[2024-12-09 10:35:50.665262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.303 [2024-12-09 10:35:50.665552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:79320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:79352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:33.303 [2024-12-09 10:35:50.665782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.303 [2024-12-09 10:35:50.665824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:79368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.303 [2024-12-09 10:35:50.665837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.665852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.665865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.665879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.665893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.665907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.665920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.665935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.665949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.665964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.665977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.665991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 
[2024-12-09 10:35:50.666289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666474] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666808] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79672 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.666981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.666996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.304 [2024-12-09 10:35:50.667009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.304 [2024-12-09 10:35:50.667024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 
10:35:50.667160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667331] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.305 [2024-12-09 10:35:50.667500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.667984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.667998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 
[2024-12-09 10:35:50.668011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668191] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.305 [2024-12-09 10:35:50.668233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.305 [2024-12-09 10:35:50.668248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.306 [2024-12-09 10:35:50.668261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.306 [2024-12-09 10:35:50.668289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.306 [2024-12-09 10:35:50.668316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.306 [2024-12-09 10:35:50.668350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.306 [2024-12-09 10:35:50.668378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.306 [2024-12-09 10:35:50.668406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.306 [2024-12-09 10:35:50.668477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.306 [2024-12-09 10:35:50.668489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80048 len:8 PRP1 0x0 PRP2 0x0 00:25:33.306 [2024-12-09 10:35:50.668507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668569] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:33.306 [2024-12-09 10:35:50.668620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:50.668640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:25:33.306 [2024-12-09 10:35:50.668668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:50.668695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:50.668722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:50.668735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:33.306 [2024-12-09 10:35:50.672089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:33.306 [2024-12-09 10:35:50.672135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0180 (9): Bad file descriptor 00:25:33.306 [2024-12-09 10:35:50.737785] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:25:33.306 8043.50 IOPS, 31.42 MiB/s [2024-12-09T09:36:05.747Z] 8188.00 IOPS, 31.98 MiB/s [2024-12-09T09:36:05.747Z] 8257.00 IOPS, 32.25 MiB/s [2024-12-09T09:36:05.747Z] [2024-12-09 10:35:54.367570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:54.367616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:54.367648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:54.367684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.306 [2024-12-09 10:35:54.367711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b0180 is same with the state(6) to be set 00:25:33.306 [2024-12-09 10:35:54.367778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.367986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.367999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.306 [2024-12-09 10:35:54.368417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.306 [2024-12-09 10:35:54.368431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 
[2024-12-09 10:35:54.368535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.307 [2024-12-09 10:35:54.368853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.307 [2024-12-09 10:35:54.368869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.308 [2024-12-09 10:35:54.370600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.309 [2024-12-09 10:35:54.370613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:33.309 [2024-12-09 10:35:54.371587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.309 [2024-12-09 10:35:54.371603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.309 [2024-12-09 10:35:54.371615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84384 len:8 PRP1 0x0 PRP2 0x0 00:25:33.309 [2024-12-09 10:35:54.371628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.309 [2024-12-09 10:35:54.371696] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:33.309 [2024-12-09 10:35:54.371715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:25:33.309 [2024-12-09 10:35:54.375034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:25:33.309 [2024-12-09 10:35:54.375076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0180 (9): Bad file descriptor 00:25:33.309 8087.60 IOPS, 31.59 MiB/s [2024-12-09T09:36:05.750Z] [2024-12-09 10:35:54.525424] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:33.309 8083.00 IOPS, 31.57 MiB/s [2024-12-09T09:36:05.750Z] 8110.43 IOPS, 31.68 MiB/s [2024-12-09T09:36:05.750Z] 8148.50 IOPS, 31.83 MiB/s [2024-12-09T09:36:05.750Z] 8186.00 IOPS, 31.98 MiB/s [2024-12-09T09:36:05.750Z] [2024-12-09 10:35:58.950259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:33.309 [2024-12-09 10:35:58.950300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.950402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b0180 is same with the state(6) to be set 00:25:33.310 [2024-12-09 10:35:58.952637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:35344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952679]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:35352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:35376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:35384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:35392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:35408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:35416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.952975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.952990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:33.310 [2024-12-09 10:35:58.953018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:35440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:35456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:35464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:35472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:35480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:35488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:35496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:35520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:35528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:35536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:35552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:35560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 
[2024-12-09 10:35:58.953546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:35576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:35584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:35592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:35600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:35608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:35616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.310 [2024-12-09 10:35:58.953714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:35640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.310 [2024-12-09 10:35:58.953742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:35648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.310 [2024-12-09 10:35:58.953769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.310 [2024-12-09 10:35:58.953784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:35656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.310 [2024-12-09 10:35:58.953798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:35664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:35672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:35680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:35688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:35704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.953982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:35712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.953996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:35720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 
10:35:58.954037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:35744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:35752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:35760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:35776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:35784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:35792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:35800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:35808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:35816 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:35832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:35840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:35848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954571] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:35864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:35880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:35896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:35928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.311 [2024-12-09 10:35:58.954831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:35936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.311 [2024-12-09 10:35:58.954844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.954858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:35944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.954871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.954885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:35952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 
[2024-12-09 10:35:58.954899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.954914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:35960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.954927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.954945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:35968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.954958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.954973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:35976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.954986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955055] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:36024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:36040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:36048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:36072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.312 [2024-12-09 10:35:58.955416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:33.312 [2024-12-09 10:35:58.955458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 
nsid:1 lba:36120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 
[2024-12-09 10:35:58.955762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:36208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955916] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:36216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.955974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.955988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.956001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.312 [2024-12-09 10:35:58.956016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:36240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.312 [2024-12-09 10:35:58.956029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:36248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 
lba:36256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:36272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 
[2024-12-09 10:35:58.956281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:33.313 [2024-12-09 10:35:58.956409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.313 [2024-12-09 10:35:58.956471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:36344 len:8 PRP1 0x0 PRP2 0x0 00:25:33.313 [2024-12-09 10:35:58.956484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.313 [2024-12-09 10:35:58.956517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.313 [2024-12-09 10:35:58.956528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36352 len:8 PRP1 0x0 PRP2 0x0 00:25:33.313 [2024-12-09 10:35:58.956541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:33.313 [2024-12-09 10:35:58.956564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:33.313 [2024-12-09 10:35:58.956575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36360 len:8 PRP1 0x0 PRP2 0x0 00:25:33.313 [2024-12-09 10:35:58.956587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:33.313 [2024-12-09 10:35:58.956652] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:33.313 [2024-12-09 10:35:58.956671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:25:33.313 [2024-12-09 10:35:58.959968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:33.313 [2024-12-09 10:35:58.960008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16b0180 (9): Bad file descriptor 00:25:33.313 [2024-12-09 10:35:59.029376] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:33.313 8142.90 IOPS, 31.81 MiB/s [2024-12-09T09:36:05.754Z] 8166.09 IOPS, 31.90 MiB/s [2024-12-09T09:36:05.754Z] 8190.08 IOPS, 31.99 MiB/s [2024-12-09T09:36:05.754Z] 8215.46 IOPS, 32.09 MiB/s [2024-12-09T09:36:05.754Z] 8241.50 IOPS, 32.19 MiB/s [2024-12-09T09:36:05.754Z] 8265.87 IOPS, 32.29 MiB/s 00:25:33.313 Latency(us) 00:25:33.313 [2024-12-09T09:36:05.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.313 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:33.313 Verification LBA range: start 0x0 length 0x4000 00:25:33.313 NVMe0n1 : 15.01 8266.59 32.29 780.92 0.00 14119.17 825.27 16311.18 00:25:33.313 [2024-12-09T09:36:05.754Z] =================================================================================================================== 00:25:33.313 [2024-12-09T09:36:05.754Z] Total : 8266.59 32.29 780.92 0.00 14119.17 825.27 16311.18 00:25:33.313 Received shutdown signal, test time was about 15.000000 seconds 00:25:33.313 00:25:33.313 Latency(us) 00:25:33.313 [2024-12-09T09:36:05.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.313 [2024-12-09T09:36:05.754Z] =================================================================================================================== 00:25:33.313 [2024-12-09T09:36:05.754Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:33.313 
10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2616110 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2616110 /var/tmp/bdevperf.sock 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2616110 ']' 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:33.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.313 10:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:33.313 10:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.313 10:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:25:33.313 10:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:33.313 [2024-12-09 10:36:05.398801] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:33.313 10:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:33.313 [2024-12-09 10:36:05.663546] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:33.313 10:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:33.877 NVMe0n1 00:25:33.877 10:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.442 00:25:34.442 10:36:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:34.699 00:25:34.699 10:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:34.699 10:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:35.263 10:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.520 10:36:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:38.801 10:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:38.801 10:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:38.801 10:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2617401 00:25:38.801 10:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.801 10:36:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2617401 00:25:39.734 { 00:25:39.734 "results": [ 00:25:39.734 { 00:25:39.734 "job": "NVMe0n1", 00:25:39.734 "core_mask": "0x1", 00:25:39.734 "workload": "verify", 00:25:39.734 "status": "finished", 00:25:39.734 "verify_range": { 00:25:39.735 "start": 0, 00:25:39.735 "length": 16384 00:25:39.735 }, 00:25:39.735 "queue_depth": 128, 00:25:39.735 "io_size": 4096, 00:25:39.735 "runtime": 1.005388, 00:25:39.735 "iops": 8082.451749971156, 00:25:39.735 "mibps": 31.572077148324826, 00:25:39.735 "io_failed": 0, 00:25:39.735 "io_timeout": 0, 00:25:39.735 "avg_latency_us": 
15761.928564370426, 00:25:39.735 "min_latency_us": 691.7688888888889, 00:25:39.735 "max_latency_us": 14175.194074074074 00:25:39.735 } 00:25:39.735 ], 00:25:39.735 "core_count": 1 00:25:39.735 } 00:25:39.735 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:39.735 [2024-12-09 10:36:04.896386] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:25:39.735 [2024-12-09 10:36:04.896501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2616110 ] 00:25:39.735 [2024-12-09 10:36:04.965781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.735 [2024-12-09 10:36:05.022840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.735 [2024-12-09 10:36:07.693857] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:39.735 [2024-12-09 10:36:07.693950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.735 [2024-12-09 10:36:07.693973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.735 [2024-12-09 10:36:07.693991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.735 [2024-12-09 10:36:07.694020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.735 [2024-12-09 10:36:07.694035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:39.735 [2024-12-09 10:36:07.694055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.735 [2024-12-09 10:36:07.694070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:39.735 [2024-12-09 10:36:07.694084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:39.735 [2024-12-09 10:36:07.694098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:25:39.735 [2024-12-09 10:36:07.694163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:25:39.735 [2024-12-09 10:36:07.694208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1590180 (9): Bad file descriptor 00:25:39.735 [2024-12-09 10:36:07.745331] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:25:39.735 Running I/O for 1 seconds... 
00:25:39.735 7998.00 IOPS, 31.24 MiB/s 00:25:39.735 Latency(us) 00:25:39.735 [2024-12-09T09:36:12.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.735 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:39.735 Verification LBA range: start 0x0 length 0x4000 00:25:39.735 NVMe0n1 : 1.01 8082.45 31.57 0.00 0.00 15761.93 691.77 14175.19 00:25:39.735 [2024-12-09T09:36:12.176Z] =================================================================================================================== 00:25:39.735 [2024-12-09T09:36:12.176Z] Total : 8082.45 31.57 0.00 0.00 15761.93 691.77 14175.19 00:25:39.735 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.735 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:39.992 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:40.249 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.249 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:40.508 10:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.074 10:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2616110 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2616110 ']' 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2616110 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2616110 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2616110' 00:25:44.353 killing process with pid 2616110 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2616110 00:25:44.353 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2616110 00:25:44.610 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:44.610 10:36:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:44.866 rmmod nvme_tcp 00:25:44.866 rmmod nvme_fabrics 00:25:44.866 rmmod nvme_keyring 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:44.866 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2613834 ']' 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2613834 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2613834 ']' 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2613834 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2613834 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2613834' 00:25:44.867 killing process with pid 2613834 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2613834 00:25:44.867 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2613834 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:45.123 10:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:47.661 00:25:47.661 real 0m36.131s 00:25:47.661 user 2m7.459s 00:25:47.661 sys 
0m5.978s 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:47.661 ************************************ 00:25:47.661 END TEST nvmf_failover 00:25:47.661 ************************************ 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.661 ************************************ 00:25:47.661 START TEST nvmf_host_discovery 00:25:47.661 ************************************ 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:47.661 * Looking for test storage... 
00:25:47.661 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.661 --rc genhtml_branch_coverage=1 00:25:47.661 --rc genhtml_function_coverage=1 00:25:47.661 --rc 
genhtml_legend=1 00:25:47.661 --rc geninfo_all_blocks=1 00:25:47.661 --rc geninfo_unexecuted_blocks=1 00:25:47.661 00:25:47.661 ' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.661 --rc genhtml_branch_coverage=1 00:25:47.661 --rc genhtml_function_coverage=1 00:25:47.661 --rc genhtml_legend=1 00:25:47.661 --rc geninfo_all_blocks=1 00:25:47.661 --rc geninfo_unexecuted_blocks=1 00:25:47.661 00:25:47.661 ' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.661 --rc genhtml_branch_coverage=1 00:25:47.661 --rc genhtml_function_coverage=1 00:25:47.661 --rc genhtml_legend=1 00:25:47.661 --rc geninfo_all_blocks=1 00:25:47.661 --rc geninfo_unexecuted_blocks=1 00:25:47.661 00:25:47.661 ' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:47.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.661 --rc genhtml_branch_coverage=1 00:25:47.661 --rc genhtml_function_coverage=1 00:25:47.661 --rc genhtml_legend=1 00:25:47.661 --rc geninfo_all_blocks=1 00:25:47.661 --rc geninfo_unexecuted_blocks=1 00:25:47.661 00:25:47.661 ' 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.661 10:36:19 
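The trace above walks through `scripts/common.sh`'s version comparison (`lt 1.15 2` expanding to `cmp_versions 1.15 '<' 2`, splitting each version on `.-:` into arrays and comparing field by field). A minimal self-contained sketch of that logic — the function name `ver_lt` and the plain `.` split are simplifications, not the exact helper from `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the field-by-field version compare traced above (assumption:
# simplified from cmp_versions in scripts/common.sh, which also splits on - and :).
ver_lt() {
    local IFS=.
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, e.g. 2 is treated as 2.0 against 1.15
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This reproduces the decision seen in the log: `lcov --version` 1.15 is below 2, so the branch-coverage `--rc` options are enabled.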
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:47.661 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.662 10:36:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.662 10:36:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:47.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:47.662 10:36:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.565 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:49.566 
10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.566 10:36:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:49.566 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:49.566 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:49.566 Found net devices under 0000:09:00.0: cvl_0_0 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:49.566 Found net devices under 0000:09:00.1: cvl_0_1 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:49.566 10:36:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:49.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:49.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:25:49.826 00:25:49.826 --- 10.0.0.2 ping statistics --- 00:25:49.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.826 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:49.826 00:25:49.826 --- 10.0.0.1 ping statistics --- 00:25:49.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.826 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.826 
10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2620085 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2620085 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2620085 ']' 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.826 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:49.826 [2024-12-09 10:36:22.157137] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:25:49.826 [2024-12-09 10:36:22.157215] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.826 [2024-12-09 10:36:22.230739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.085 [2024-12-09 10:36:22.290284] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.085 [2024-12-09 10:36:22.290341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.085 [2024-12-09 10:36:22.290355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.085 [2024-12-09 10:36:22.290366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.085 [2024-12-09 10:36:22.290375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:50.085 [2024-12-09 10:36:22.290991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 [2024-12-09 10:36:22.441994] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 [2024-12-09 10:36:22.450217] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:50.085 10:36:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 null0 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 null1 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2620164 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2620164 /tmp/host.sock 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2620164 ']' 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:25:50.085 
10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:50.085 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.085 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.344 [2024-12-09 10:36:22.527883] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:25:50.344 [2024-12-09 10:36:22.527960] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2620164 ] 00:25:50.344 [2024-12-09 10:36:22.592689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.344 [2024-12-09 10:36:22.651526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:50.344 10:36:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.344 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:50.602 10:36:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.602 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:50.603 10:36:22 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.603 10:36:22 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.603 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 [2024-12-09 10:36:23.055765] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:50.862 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:25:50.863 10:36:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:51.429 [2024-12-09 10:36:23.789168] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:51.429 [2024-12-09 10:36:23.789213] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:51.429 [2024-12-09 10:36:23.789238] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:51.687 [2024-12-09 10:36:23.876499] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:51.687 [2024-12-09 10:36:23.938227] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:51.687 [2024-12-09 10:36:23.939253] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x11c9aa0:1 started. 00:25:51.687 [2024-12-09 10:36:23.941032] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:51.687 [2024-12-09 10:36:23.941054] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:51.687 [2024-12-09 10:36:23.947514] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11c9aa0 was disconnected and freed. delete nvme_qpair. 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:51.946 10:36:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:51.946 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.204 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:52.204 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:52.205 
10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.205 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.463 [2024-12-09 10:36:24.668219] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x11c9c80:1 started. 00:25:52.463 [2024-12-09 10:36:24.679303] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x11c9c80 was disconnected and freed. delete nvme_qpair. 
00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.463 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 [2024-12-09 10:36:24.732791] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:52.464 [2024-12-09 10:36:24.733900] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:52.464 [2024-12-09 10:36:24.733944] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.464 10:36:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 
'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:52.464 [2024-12-09 10:36:24.820667] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:52.464 10:36:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:25:52.464 [2024-12-09 10:36:24.884445] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:52.464 [2024-12-09 10:36:24.884492] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:52.464 [2024-12-09 10:36:24.884522] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:52.464 [2024-12-09 10:36:24.884530] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.840 
10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.840 [2024-12-09 10:36:25.952517] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:53.840 [2024-12-09 10:36:25.952549] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:53.840 [2024-12-09 10:36:25.952981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.840 [2024-12-09 10:36:25.953009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.840 [2024-12-09 10:36:25.953048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.840 [2024-12-09 10:36:25.953072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.840 [2024-12-09 10:36:25.953111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.840 [2024-12-09 10:36:25.953134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.840 [2024-12-09 10:36:25.953167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.840 [2024-12-09 10:36:25.953210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.840 [2024-12-09 10:36:25.953235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.840 [2024-12-09 10:36:25.962972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.840 10:36:25 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.840 [2024-12-09 10:36:25.973013] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:25:53.840 [2024-12-09 10:36:25.973037] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.840 [2024-12-09 10:36:25.973052] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:25.973061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.840 [2024-12-09 10:36:25.973095] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:25.973286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.840 [2024-12-09 10:36:25.973319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.840 [2024-12-09 10:36:25.973347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.840 [2024-12-09 10:36:25.973381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.840 [2024-12-09 10:36:25.973446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.840 [2024-12-09 10:36:25.973474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.840 [2024-12-09 10:36:25.973498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.840 [2024-12-09 10:36:25.973537] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.840 [2024-12-09 10:36:25.973554] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:25:53.840 [2024-12-09 10:36:25.973568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.840 [2024-12-09 10:36:25.983145] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.840 [2024-12-09 10:36:25.983167] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.840 [2024-12-09 10:36:25.983176] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:25.983183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.840 [2024-12-09 10:36:25.983226] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:25.983409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.840 [2024-12-09 10:36:25.983441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.840 [2024-12-09 10:36:25.983468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.840 [2024-12-09 10:36:25.983515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.840 [2024-12-09 10:36:25.983548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.840 [2024-12-09 10:36:25.983571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.840 [2024-12-09 10:36:25.983607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:53.840 [2024-12-09 10:36:25.983625] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.840 [2024-12-09 10:36:25.983639] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.840 [2024-12-09 10:36:25.983652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.840 [2024-12-09 10:36:25.993259] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.840 [2024-12-09 10:36:25.993280] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.840 [2024-12-09 10:36:25.993289] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:25.993296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.840 [2024-12-09 10:36:25.993324] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:53.840 [2024-12-09 10:36:25.993477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.840 [2024-12-09 10:36:25.993507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.840 [2024-12-09 10:36:25.993532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.840 [2024-12-09 10:36:25.993565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.840 [2024-12-09 10:36:25.993613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.840 [2024-12-09 10:36:25.993639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.840 [2024-12-09 10:36:25.993661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.840 [2024-12-09 10:36:25.993680] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.840 [2024-12-09 10:36:25.993711] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.840 [2024-12-09 10:36:25.993733] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:53.840 [2024-12-09 10:36:26.003358] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.840 [2024-12-09 10:36:26.003383] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.840 [2024-12-09 10:36:26.003392] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:26.003399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.840 [2024-12-09 10:36:26.003427] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:53.840 [2024-12-09 10:36:26.003657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.840 [2024-12-09 10:36:26.003690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.840 [2024-12-09 10:36:26.003717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.840 [2024-12-09 10:36:26.003751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.840 [2024-12-09 10:36:26.003785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.840 [2024-12-09 10:36:26.003809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.840 [2024-12-09 10:36:26.003834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.840 [2024-12-09 10:36:26.003869] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.840 [2024-12-09 10:36:26.003885] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.840 [2024-12-09 10:36:26.003898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.840 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.840 [2024-12-09 10:36:26.013461] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.840 [2024-12-09 10:36:26.013485] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.840 [2024-12-09 10:36:26.013500] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.840 [2024-12-09 10:36:26.013508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.840 [2024-12-09 10:36:26.013538] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:53.840 [2024-12-09 10:36:26.013728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.840 [2024-12-09 10:36:26.013760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.840 [2024-12-09 10:36:26.013788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.840 [2024-12-09 10:36:26.013823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.840 [2024-12-09 10:36:26.013892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.841 [2024-12-09 10:36:26.013918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.841 [2024-12-09 10:36:26.013956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.841 [2024-12-09 10:36:26.013977] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.841 [2024-12-09 10:36:26.013992] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.841 [2024-12-09 10:36:26.014020] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.841 [2024-12-09 10:36:26.023571] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.841 [2024-12-09 10:36:26.023595] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:25:53.841 [2024-12-09 10:36:26.023605] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.841 [2024-12-09 10:36:26.023613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.841 [2024-12-09 10:36:26.023641] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.841 [2024-12-09 10:36:26.023812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.841 [2024-12-09 10:36:26.023844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.841 [2024-12-09 10:36:26.023872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.841 [2024-12-09 10:36:26.023907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.841 [2024-12-09 10:36:26.023940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.841 [2024-12-09 10:36:26.023964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.841 [2024-12-09 10:36:26.023989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:53.841 [2024-12-09 10:36:26.024009] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.841 [2024-12-09 10:36:26.024025] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.841 [2024-12-09 10:36:26.024039] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 [2024-12-09 10:36:26.033674] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:53.841 [2024-12-09 10:36:26.033702] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:53.841 [2024-12-09 10:36:26.033713] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:53.841 [2024-12-09 10:36:26.033720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:53.841 [2024-12-09 10:36:26.033749] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:53.841 [2024-12-09 10:36:26.033952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.841 [2024-12-09 10:36:26.033983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x119a050 with addr=10.0.0.2, port=4420 00:25:53.841 [2024-12-09 10:36:26.034010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x119a050 is same with the state(6) to be set 00:25:53.841 [2024-12-09 10:36:26.034043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x119a050 (9): Bad file descriptor 00:25:53.841 [2024-12-09 10:36:26.034092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:53.841 [2024-12-09 10:36:26.034119] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:53.841 [2024-12-09 10:36:26.034169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:25:53.841 [2024-12-09 10:36:26.034208] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:53.841 [2024-12-09 10:36:26.034224] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:53.841 [2024-12-09 10:36:26.034238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:53.841 [2024-12-09 10:36:26.038477] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:53.841 [2024-12-09 10:36:26.038507] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:25:53.841 10:36:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:53.841 10:36:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.841 10:36:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.841 10:36:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.315 [2024-12-09 10:36:27.309761] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:55.315 [2024-12-09 
10:36:27.309789] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:55.315 [2024-12-09 10:36:27.309812] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:55.315 [2024-12-09 10:36:27.396092] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:55.315 [2024-12-09 10:36:27.664448] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:55.315 [2024-12-09 10:36:27.665271] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1300dd0:1 started. 00:25:55.315 [2024-12-09 10:36:27.667414] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:55.315 [2024-12-09 10:36:27.667469] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.315 [2024-12-09 10:36:27.669103] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1300dd0 was disconnected and freed. delete nvme_qpair. 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.315 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.315 request: 00:25:55.315 { 00:25:55.315 "name": "nvme", 00:25:55.315 "trtype": "tcp", 00:25:55.315 "traddr": "10.0.0.2", 00:25:55.315 "adrfam": "ipv4", 00:25:55.315 "trsvcid": "8009", 00:25:55.315 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:55.315 "wait_for_attach": true, 00:25:55.315 "method": "bdev_nvme_start_discovery", 00:25:55.315 "req_id": 1 00:25:55.315 } 00:25:55.315 Got JSON-RPC error response 00:25:55.315 response: 00:25:55.315 { 00:25:55.316 "code": -17, 00:25:55.316 "message": "File exists" 00:25:55.316 } 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.316 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.575 request: 00:25:55.575 { 00:25:55.575 "name": "nvme_second", 00:25:55.575 "trtype": "tcp", 00:25:55.575 "traddr": "10.0.0.2", 00:25:55.575 "adrfam": "ipv4", 00:25:55.575 "trsvcid": "8009", 00:25:55.575 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:55.575 "wait_for_attach": true, 00:25:55.575 "method": "bdev_nvme_start_discovery", 00:25:55.575 "req_id": 1 00:25:55.575 } 00:25:55.575 Got JSON-RPC error response 00:25:55.575 response: 00:25:55.575 
{ 00:25:55.575 "code": -17, 00:25:55.575 "message": "File exists" 00:25:55.575 } 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.575 
10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:55.575 10:36:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.575 10:36:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.507 [2024-12-09 10:36:28.862842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.507 [2024-12-09 10:36:28.862891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11aad40 with addr=10.0.0.2, port=8010 00:25:56.507 [2024-12-09 10:36:28.862931] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:56.507 [2024-12-09 10:36:28.862954] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:56.507 [2024-12-09 10:36:28.862974] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:57.439 [2024-12-09 10:36:29.865306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.439 [2024-12-09 10:36:29.865373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11aad40 with addr=10.0.0.2, port=8010 00:25:57.439 [2024-12-09 10:36:29.865413] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:57.439 [2024-12-09 10:36:29.865436] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:57.439 [2024-12-09 10:36:29.865457] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:58.818 [2024-12-09 10:36:30.867460] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:58.818 request: 00:25:58.818 { 00:25:58.818 "name": "nvme_second", 00:25:58.818 "trtype": "tcp", 00:25:58.818 "traddr": "10.0.0.2", 00:25:58.818 "adrfam": "ipv4", 00:25:58.818 "trsvcid": "8010", 00:25:58.818 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:58.818 "wait_for_attach": false, 00:25:58.818 "attach_timeout_ms": 3000, 
00:25:58.818 "method": "bdev_nvme_start_discovery", 00:25:58.818 "req_id": 1 00:25:58.818 } 00:25:58.818 Got JSON-RPC error response 00:25:58.818 response: 00:25:58.818 { 00:25:58.818 "code": -110, 00:25:58.818 "message": "Connection timed out" 00:25:58.818 } 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 2620164 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:58.818 rmmod nvme_tcp 00:25:58.818 rmmod nvme_fabrics 00:25:58.818 rmmod nvme_keyring 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2620085 ']' 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2620085 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2620085 ']' 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2620085 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:58.818 10:36:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2620085 00:25:58.818 10:36:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:58.818 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:58.818 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2620085' 00:25:58.818 killing process with pid 2620085 00:25:58.819 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2620085 00:25:58.819 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2620085 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.079 10:36:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.984 
10:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:00.984 00:26:00.984 real 0m13.717s 00:26:00.984 user 0m19.716s 00:26:00.984 sys 0m2.976s 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.984 ************************************ 00:26:00.984 END TEST nvmf_host_discovery 00:26:00.984 ************************************ 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:00.984 ************************************ 00:26:00.984 START TEST nvmf_host_multipath_status 00:26:00.984 ************************************ 00:26:00.984 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:01.243 * Looking for test storage... 
00:26:01.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:01.243 10:36:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:01.243 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:01.244 10:36:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.244 --rc genhtml_branch_coverage=1 00:26:01.244 --rc genhtml_function_coverage=1 00:26:01.244 --rc genhtml_legend=1 00:26:01.244 --rc geninfo_all_blocks=1 00:26:01.244 --rc geninfo_unexecuted_blocks=1 00:26:01.244 00:26:01.244 ' 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.244 --rc genhtml_branch_coverage=1 00:26:01.244 --rc genhtml_function_coverage=1 00:26:01.244 --rc genhtml_legend=1 00:26:01.244 --rc geninfo_all_blocks=1 00:26:01.244 --rc geninfo_unexecuted_blocks=1 00:26:01.244 00:26:01.244 ' 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.244 --rc genhtml_branch_coverage=1 00:26:01.244 --rc genhtml_function_coverage=1 00:26:01.244 --rc genhtml_legend=1 00:26:01.244 --rc geninfo_all_blocks=1 00:26:01.244 --rc geninfo_unexecuted_blocks=1 00:26:01.244 00:26:01.244 ' 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:01.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:01.244 --rc genhtml_branch_coverage=1 00:26:01.244 --rc genhtml_function_coverage=1 00:26:01.244 --rc genhtml_legend=1 00:26:01.244 --rc geninfo_all_blocks=1 00:26:01.244 --rc geninfo_unexecuted_blocks=1 00:26:01.244 00:26:01.244 ' 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:01.244 
10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
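The PATH values echoed above keep growing because paths/export.sh prepends the same Go/protoc/golangci directories every time it is sourced, so repeated sourcing duplicates them. A sketch of a dedup pass that would collapse such a PATH while preserving first-seen order (the helper name is illustrative, not part of SPDK):

```shell
#!/usr/bin/env bash
# dedup_path "$PATH": print the colon-separated list with later duplicates
# removed, keeping the first occurrence of each directory.
dedup_path() {
  local d out='' IFS=':'
  for d in $1; do                      # IFS=: splits on path separators
    case ":$out:" in
      *":$d:"*) ;;                     # already present, skip
      *) out=${out:+$out:}$d ;;        # append, adding ':' only if non-empty
    esac
  done
  printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin"
# → /opt/go/1.21.1/bin:/usr/local/bin
```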
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:01.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:01.244 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
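The trace above records a real (non-fatal) script error: common.sh line 33 runs `'[' '' -eq 1 ']'` and bash reports "[: : integer expression expected", because an empty string reaches a numeric test. A defensive sketch of the usual fix, defaulting empty/unset to 0 before `-eq` (the variable name here is illustrative, not SPDK's):

```shell
#!/usr/bin/env bash
# Reproduce the failing shape safely: an empty variable reaching -eq.
flag=""                                 # empty, as in the logged run
if [ "${flag:-0}" -eq 1 ]; then         # :-0 ensures -eq always sees an integer
  echo "flag enabled"
else
  echo "flag disabled"
fi
# → flag disabled
```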
00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:01.245 10:36:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:01.245 10:36:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:03.777 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:03.778 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:03.778 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:03.778 Found net devices under 0000:09:00.0: cvl_0_0 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:03.778 10:36:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:03.778 Found net devices under 0000:09:00.1: cvl_0_1 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:03.778 10:36:35 
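The "Found net devices under 0000:09:00.x" lines above come from globbing each PCI function's `net/` directory in sysfs and stripping the directory prefix. A runnable sketch of that mapping, using a throwaway directory in place of `/sys/bus/pci/devices/$pci/net/` so it works without the E810 hardware:

```shell
#!/usr/bin/env bash
# Fake sysfs tree standing in for /sys/bus/pci/devices/0000:09:00.0/net/.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/net/cvl_0_0"

pci_net_devs=( "$fake_sys/net/"* )           # one entry per interface directory
pci_net_devs=( "${pci_net_devs[@]##*/}" )    # basename each entry -> ifnames
echo "Found net devices: ${pci_net_devs[*]}"
# → Found net devices: cvl_0_0
```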
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:03.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:03.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:26:03.778 00:26:03.778 --- 10.0.0.2 ping statistics --- 00:26:03.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.778 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:03.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:03.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:26:03.778 00:26:03.778 --- 10.0.0.1 ping statistics --- 00:26:03.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:03.778 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2623290 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
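`nvmf_tcp_init` above splits target and initiator across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule opens port 4420 toward the initiator interface, and a ping in each direction verifies the link. A dry-run sketch of that sequence; `run` only echoes, so the sketch needs neither root nor the real NICs:

```shell
#!/usr/bin/env bash
# Dry-run wrapper: print each command instead of executing it.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                        # target NIC -> netns
run ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                     # reachability check
```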
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2623290 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2623290 ']' 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.778 10:36:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.778 [2024-12-09 10:36:35.914593] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:26:03.778 [2024-12-09 10:36:35.914677] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:03.779 [2024-12-09 10:36:35.982954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:03.779 [2024-12-09 10:36:36.035907] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:03.779 [2024-12-09 10:36:36.035963] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:03.779 [2024-12-09 10:36:36.035990] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:03.779 [2024-12-09 10:36:36.036001] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:03.779 [2024-12-09 10:36:36.036010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:03.779 [2024-12-09 10:36:36.037452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:03.779 [2024-12-09 10:36:36.037458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2623290 00:26:03.779 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:04.036 [2024-12-09 10:36:36.421898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.036 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:26:04.294 Malloc0 00:26:04.552 10:36:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:04.809 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:05.066 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:05.322 [2024-12-09 10:36:37.533097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:05.322 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:05.580 [2024-12-09 10:36:37.797787] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2623499 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2623499 /var/tmp/bdevperf.sock 00:26:05.580 10:36:37 
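multipath_status.sh then builds the target over RPC: a TCP transport, a 64 MiB / 512 B-block Malloc bdev, subsystem cnode1, and listeners on both 4420 and 4421 of the same address (which is what makes two paths possible later). The same sequence condensed; `rpc` echoes instead of invoking scripts/rpc.py, so the sketch runs standalone:

```shell
#!/usr/bin/env bash
# Dry-run stand-in for scripts/rpc.py against the default /var/tmp/spdk.sock.
rpc() { echo "+ rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
```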
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2623499 ']' 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:05.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.580 10:36:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:05.837 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:05.837 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:26:05.837 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:06.095 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:06.658 Nvme0n1 00:26:06.658 10:36:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:06.915 Nvme0n1 00:26:06.915 10:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:06.915 10:36:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:09.441 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:09.441 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:09.442 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.699 10:36:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:10.633 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:10.633 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:10.633 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.633 10:36:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.891 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:10.891 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:10.891 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.891 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.150 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:11.150 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.150 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.150 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.408 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.408 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.408 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.408 10:36:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.666 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.666 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.666 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.666 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.924 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.924 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.924 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.924 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:12.182 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:12.182 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:12.182 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:12.748 10:36:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:12.748 10:36:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.117 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:14.373 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.373 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:14.373 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.373 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:14.629 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.629 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:14.629 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.629 10:36:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:14.887 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.887 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:14.887 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.887 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:15.144 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.144 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:15.144 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.144 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:15.401 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.401 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:15.401 10:36:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:15.659 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:15.916 10:36:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.286 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:17.544 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:17.544 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:17.544 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.544 10:36:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:17.805 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:17.805 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:17.805 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:17.805 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:18.063 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.063 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:18.063 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.063 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:18.321 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.321 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:18.321 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.321 10:36:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:18.579 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.579 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:18.579 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.146 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:19.146 10:36:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.519 10:36:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:20.776 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:20.776 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:20.776 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.776 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:21.033 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.033 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.033 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.033 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.290 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.290 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.290 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.290 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.547 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.547 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:21.547 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.547 10:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.804 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.805 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:21.805 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:22.062 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:22.319 10:36:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:23.688 10:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:23.688 10:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:23.688 10:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.688 10:36:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.688 10:36:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.688 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.688 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.688 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.945 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.945 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.945 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.945 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:24.203 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.203 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:24.203 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.203 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:24.462 
10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:24.462 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:24.462 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.462 10:36:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:24.720 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.720 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:24.720 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:24.720 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:24.977 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:24.977 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:24.977 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:25.234 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:25.491 10:36:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:26.864 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:26.864 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:26.864 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.864 10:36:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:26.864 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.864 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:26.864 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.864 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.122 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.122 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.122 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.122 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.387 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.387 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:27.387 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.387 10:36:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.709 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.709 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:27.709 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.709 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:27.993 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.993 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:27.993 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.993 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.252 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.252 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:28.510 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:28.510 10:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:28.768 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:29.025 10:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:29.960 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:29.960 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:29.960 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:29.960 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:30.218 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.218 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:30.218 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.475 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:30.732 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.732 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:30.732 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.732 10:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.990 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.990 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.990 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:30.990 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:31.248 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.248 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:31.248 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.248 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:31.506 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.506 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:31.506 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.506 10:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:31.763 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.763 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:31.763 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.020 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:32.276 10:37:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:33.218 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:33.218 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:33.218 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.218 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:33.475 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:33.475 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:33.475 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.475 10:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:33.731 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.731 10:37:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:33.731 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.731 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:33.988 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:33.988 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:33.988 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:33.989 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.246 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.246 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.246 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.246 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.520 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.520 
10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.520 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.520 10:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.778 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.778 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:34.778 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.345 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:35.603 10:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:36.537 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:36.537 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.537 10:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.537 10:37:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:36.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:36.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.795 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:37.053 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.053 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:37.053 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.053 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:37.333 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.333 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:37.333 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.333 10:37:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:37.591 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.591 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:37.592 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.592 10:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:37.850 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:37.850 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:37.850 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:37.850 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:38.108 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.108 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:38.108 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:38.365 10:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:38.622 10:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.991 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:40.248 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:40.248 10:37:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:40.248 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.248 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:40.505 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.505 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:40.505 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.505 10:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:40.762 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:40.762 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:40.762 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:40.763 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:41.020 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:41.020 
10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:41.020 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:41.020 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2623499 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2623499 ']' 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2623499 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.277 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2623499 00:26:41.534 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:41.534 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:41.534 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2623499' 00:26:41.534 killing process with pid 2623499 00:26:41.534 10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2623499 00:26:41.534 
10:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2623499 00:26:41.534 { 00:26:41.534 "results": [ 00:26:41.534 { 00:26:41.534 "job": "Nvme0n1", 00:26:41.534 "core_mask": "0x4", 00:26:41.534 "workload": "verify", 00:26:41.534 "status": "terminated", 00:26:41.534 "verify_range": { 00:26:41.534 "start": 0, 00:26:41.534 "length": 16384 00:26:41.534 }, 00:26:41.534 "queue_depth": 128, 00:26:41.534 "io_size": 4096, 00:26:41.534 "runtime": 34.289948, 00:26:41.534 "iops": 7689.221342651205, 00:26:41.534 "mibps": 30.03602086973127, 00:26:41.534 "io_failed": 0, 00:26:41.534 "io_timeout": 0, 00:26:41.534 "avg_latency_us": 16611.78180243271, 00:26:41.534 "min_latency_us": 1638.4, 00:26:41.534 "max_latency_us": 4026531.84 00:26:41.534 } 00:26:41.534 ], 00:26:41.534 "core_count": 1 00:26:41.534 } 00:26:41.802 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2623499 00:26:41.802 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:41.802 [2024-12-09 10:36:37.862404] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:26:41.802 [2024-12-09 10:36:37.862500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2623499 ] 00:26:41.802 [2024-12-09 10:36:37.932177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:41.802 [2024-12-09 10:36:37.993532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.802 Running I/O for 90 seconds... 
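The repeated `port_status` checks above parse `bdev_nvme_get_io_paths` output with a jq filter of the form `.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD`. A minimal Python sketch of that same selection, using a hypothetical sample payload (only the field names are taken from the log; the values are illustrative):

```python
# Hypothetical sample shaped like the bdev_nvme_get_io_paths payloads
# that the log's jq filters operate on (assumption: structure inferred
# from the jq expressions, values invented for illustration).
sample = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"}, "current": True,
             "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"}, "current": False,
             "connected": True, "accessible": False},
        ]}
    ]
}

def port_status(payload, port, field):
    # Mirrors: jq '.poll_groups[].io_paths[]
    #              | select(.transport.trsvcid=="PORT").FIELD'
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # False
```

In the test script the returned value is then string-compared against the expected `true`/`false` (the `[[ true == \t\r\u\e ]]` lines), which is what `check_status` asserts after each `set_ANA_state` transition.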
00:26:41.802 8190.00 IOPS, 31.99 MiB/s [2024-12-09T09:37:14.243Z] 8251.00 IOPS, 32.23 MiB/s [2024-12-09T09:37:14.243Z] 8248.67 IOPS, 32.22 MiB/s [2024-12-09T09:37:14.243Z] 8271.75 IOPS, 32.31 MiB/s [2024-12-09T09:37:14.243Z] 8269.20 IOPS, 32.30 MiB/s [2024-12-09T09:37:14.243Z] 8279.83 IOPS, 32.34 MiB/s [2024-12-09T09:37:14.243Z] 8239.29 IOPS, 32.18 MiB/s [2024-12-09T09:37:14.243Z] 8210.75 IOPS, 32.07 MiB/s [2024-12-09T09:37:14.243Z] 8195.33 IOPS, 32.01 MiB/s [2024-12-09T09:37:14.243Z] 8212.70 IOPS, 32.08 MiB/s [2024-12-09T09:37:14.243Z] 8223.82 IOPS, 32.12 MiB/s [2024-12-09T09:37:14.243Z] 8206.92 IOPS, 32.06 MiB/s [2024-12-09T09:37:14.243Z] 8233.46 IOPS, 32.16 MiB/s [2024-12-09T09:37:14.243Z] 8231.64 IOPS, 32.15 MiB/s [2024-12-09T09:37:14.243Z] 8241.27 IOPS, 32.19 MiB/s [2024-12-09T09:37:14.243Z] [2024-12-09 10:36:54.469678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.469733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.469797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.469819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.469843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.469860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.469882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 
nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.469899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.469936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.469957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.469973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.469995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:19 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.470983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74744 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.470997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:41.803 [2024-12-09 10:36:54.471226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 
[2024-12-09 10:36:54.471447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 
10:36:54.471647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.803 [2024-12-09 10:36:54.471903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:41.803 [2024-12-09 10:36:54.471924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.471940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.471962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.471977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.471999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.472014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.804 [2024-12-09 10:36:54.472764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.472819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.472859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.472898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.472938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.472962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.472978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473002] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473528] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:75096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:75104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:75112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:75120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:75128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:75144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:75160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.473971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.473996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:75168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.474011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.474036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:75176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.474051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.474201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:75184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.804 [2024-12-09 10:36:54.474224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:41.804 [2024-12-09 10:36:54.474258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:75192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:75216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:75224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:75240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:75256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:75264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:75272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.474787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:75288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:75296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:75304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.474981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:75312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.474996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:75320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.475038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:75328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.475084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:75336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.475158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:75344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.475208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:75352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.475251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:75360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:36:54.475294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475424] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:36:54.475591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.805 [2024-12-09 10:36:54.475606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:41.805 7750.12 IOPS, 30.27 MiB/s [2024-12-09T09:37:14.246Z] 7294.24 IOPS, 28.49 MiB/s [2024-12-09T09:37:14.246Z] 6889.00 IOPS, 26.91 MiB/s [2024-12-09T09:37:14.246Z] 6526.42 IOPS, 25.49 MiB/s [2024-12-09T09:37:14.246Z] 6590.50 IOPS, 25.74 MiB/s [2024-12-09T09:37:14.246Z] 6661.10 IOPS, 26.02 MiB/s [2024-12-09T09:37:14.246Z] 6759.59 IOPS, 26.40 MiB/s [2024-12-09T09:37:14.246Z] 6932.57 IOPS, 27.08 MiB/s 
[2024-12-09T09:37:14.246Z] 7091.88 IOPS, 27.70 MiB/s [2024-12-09T09:37:14.246Z] 7225.96 IOPS, 28.23 MiB/s [2024-12-09T09:37:14.246Z] 7252.96 IOPS, 28.33 MiB/s [2024-12-09T09:37:14.246Z] 7282.93 IOPS, 28.45 MiB/s [2024-12-09T09:37:14.246Z] 7308.04 IOPS, 28.55 MiB/s [2024-12-09T09:37:14.246Z] 7390.52 IOPS, 28.87 MiB/s [2024-12-09T09:37:14.246Z] 7494.17 IOPS, 29.27 MiB/s [2024-12-09T09:37:14.246Z] 7592.26 IOPS, 29.66 MiB/s [2024-12-09T09:37:14.246Z] [2024-12-09 10:37:11.006781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:37:11.006852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:37:11.006893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:37:11.006913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:37:11.006944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:37:11.006976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:37:11.006999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:37:11.007016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:37:11.007039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:93928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:37:11.007055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:37:11.007076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.805 [2024-12-09 10:37:11.007092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.805 [2024-12-09 10:37:11.007129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:16 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:94104 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 
m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.007931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:41.806 [2024-12-09 10:37:11.007968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.007989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.806 [2024-12-09 10:37:11.008006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.806 [2024-12-09 10:37:11.008044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.806 [2024-12-09 10:37:11.008083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.806 [2024-12-09 10:37:11.008145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:41.806 
[2024-12-09 10:37:11.008585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 
10:37:11.008815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.008969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.008991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.009007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 
10:37:11.009029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.009045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.009068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.806 [2024-12-09 10:37:11.009083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:41.806 [2024-12-09 10:37:11.009105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009260] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.009827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.009965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.009981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:94256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.807 [2024-12-09 10:37:11.010375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.010413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.010459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.010482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.010498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:41.807 [2024-12-09 10:37:11.011018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.807 [2024-12-09 10:37:11.011041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011132] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:93944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.011589] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.808 [2024-12-09 10:37:11.011626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.011647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.808 [2024-12-09 10:37:11.011664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.012962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:41.808 [2024-12-09 10:37:11.012984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.808 [2024-12-09 10:37:11.013000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:94032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:94160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.808 [2024-12-09 10:37:11.013560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:26:41.808 [2024-12-09 10:37:11.013619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.808 [2024-12-09 10:37:11.013635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:94376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.015980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.015996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:93912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.016908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.016930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.016945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.019720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:94744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.809 [2024-12-09 10:37:11.019746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.019776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.019796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.019819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:94800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.019836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.019858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.019874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.019897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:94832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.019913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:26:41.809 [2024-12-09 10:37:11.019936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.809 [2024-12-09 10:37:11.019957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.019981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:94864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.019999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.020038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.020077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.020116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.020444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:94416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.020967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.020989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:94464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.810 [2024-12-09 10:37:11.021805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.021843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.021881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.021921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.021959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.021981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.022002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.022026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.810 [2024-12-09 10:37:11.022042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:26:41.810 [2024-12-09 10:37:11.022064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:94200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.022080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:94984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.022629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.022667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.022705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.022726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.022743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.023187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.023254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.811 [2024-12-09 10:37:11.023296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.023335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:94824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.023372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.811 [2024-12-09 10:37:11.023410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:26:41.811 [2024-12-09 10:37:11.023437] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:94784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.811 [2024-12-09 10:37:11.023454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.811 [2024-12-09 10:37:11.023507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.811 [2024-12-09 10:37:11.023560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.811 [2024-12-09 10:37:11.023596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.811 [2024-12-09 10:37:11.023632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.023676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.023712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.023734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.023749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.024323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:94920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.811 [2024-12-09 10:37:11.024348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.024375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.024393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.024416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.024441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.024478] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.024494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:41.811 [2024-12-09 10:37:11.024522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.811 [2024-12-09 10:37:11.024539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.024577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.024632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.024671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.024709] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.024748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.024787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.024825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.024864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.024921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.024958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.024980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.024996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.025018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.025038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.025061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.025077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.025100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.025116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.025137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.025163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.025196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.025212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.027452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:94928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.027586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:94792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.027698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.027739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:94184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:94400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.027922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:94704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.027979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:94352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.027995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.028030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:94472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.028066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:94768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.028103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.028169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.028210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:94736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.812 [2024-12-09 10:37:11.028261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.028299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:41.812 [2024-12-09 10:37:11.028321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:95000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.812 [2024-12-09 10:37:11.028337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.028359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.028375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.030982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.031007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.031052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031074] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.031089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.031150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.031193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.031231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031307] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031546] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031764] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.031825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.031845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.032828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.813 [2024-12-09 10:37:11.032851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.032876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:95152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.032893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.032913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:94496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.813 [2024-12-09 10:37:11.032929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:41.813 [2024-12-09 10:37:11.032950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.042814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:95840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.042831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.042853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.042874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.042897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:95872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.042913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.042935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.042951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.042972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.042988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:95896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.043228] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.043266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.043460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.043482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.043498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.044658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.044681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.044707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.816 [2024-12-09 10:37:11.044723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.044745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.816 [2024-12-09 10:37:11.044760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:41.816 [2024-12-09 10:37:11.044781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.044796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.044816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.044831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.044852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.044867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.044903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.044919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.044941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.044958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.044985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:95080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.045040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:95656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:94936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045242] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:95936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.045317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.045355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:95968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.045393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.045430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045456] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.045473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.045510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.045525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:96016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:94880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046407] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:95560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:95624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.817 [2024-12-09 10:37:11.046786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.817 [2024-12-09 10:37:11.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:41.817 [2024-12-09 10:37:11.046860] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.046875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.046914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.046930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.047978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.048005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:96080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048155] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:96112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:96128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.048274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.048313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.048352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048374] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.048391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.818 [2024-12-09 10:37:11.048429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.818 [2024-12-09 10:37:11.048589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.818 [2024-12-09 10:37:11.048605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:26:41.818 [2024-12-09 10:37:11.048626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:95688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.818 [2024-12-09 10:37:11.048656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
[... repeated READ/WRITE command + completion pairs elided; every completion logged between 10:37:11.048679 and 10:37:11.059025 fails with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:26:41.821 [2024-12-09 10:37:11.059048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.821 [2024-12-09 10:37:11.059064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:26:41.821 [2024-12-09 10:37:11.059103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:41.821 [2024-12-09 10:37:11.059120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:26:41.821 7651.47 IOPS, 29.89 MiB/s [2024-12-09T09:37:14.262Z]
7668.70 IOPS, 29.96 MiB/s [2024-12-09T09:37:14.262Z]
7686.12 IOPS, 30.02 MiB/s [2024-12-09T09:37:14.262Z]
Received shutdown signal, test time was about 34.290753 seconds
00:26:41.821
00:26:41.821                                                                                  Latency(us)
00:26:41.821 [2024-12-09T09:37:14.262Z] Device Information       : runtime(s)    IOPS     MiB/s   Fail/s   TO/s     Average    min       max
00:26:41.821 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:41.821 Verification LBA range: start 0x0 length 0x4000
00:26:41.821 Nvme0n1                  : 34.29      7689.22  30.04   0.00     0.00     16611.78   1638.40   4026531.84
00:26:41.821 [2024-12-09T09:37:14.262Z] ===================================================================================================================
00:26:41.821 [2024-12-09T09:37:14.262Z] Total                    :            7689.22  30.04   0.00     0.00     16611.78   1638.40   4026531.84
00:26:41.821 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # 
nvmftestfini 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:42.078 rmmod nvme_tcp 00:26:42.078 rmmod nvme_fabrics 00:26:42.078 rmmod nvme_keyring 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2623290 ']' 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2623290 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2623290 ']' 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2623290 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2623290 00:26:42.078 10:37:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2623290' 00:26:42.078 killing process with pid 2623290 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2623290 00:26:42.078 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2623290 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.351 10:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.351 10:37:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.886 00:26:44.886 real 0m43.335s 00:26:44.886 user 2m11.017s 00:26:44.886 sys 0m10.971s 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.886 ************************************ 00:26:44.886 END TEST nvmf_host_multipath_status 00:26:44.886 ************************************ 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.886 ************************************ 00:26:44.886 START TEST nvmf_discovery_remove_ifc 00:26:44.886 ************************************ 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:44.886 * Looking for test storage... 
00:26:44.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:26:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.886 --rc genhtml_branch_coverage=1 00:26:44.886 --rc genhtml_function_coverage=1 00:26:44.886 --rc genhtml_legend=1 00:26:44.886 --rc geninfo_all_blocks=1 00:26:44.886 --rc geninfo_unexecuted_blocks=1 00:26:44.886 00:26:44.886 ' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.886 --rc genhtml_branch_coverage=1 00:26:44.886 --rc genhtml_function_coverage=1 00:26:44.886 --rc genhtml_legend=1 00:26:44.886 --rc geninfo_all_blocks=1 00:26:44.886 --rc geninfo_unexecuted_blocks=1 00:26:44.886 00:26:44.886 ' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.886 --rc genhtml_branch_coverage=1 00:26:44.886 --rc genhtml_function_coverage=1 00:26:44.886 --rc genhtml_legend=1 00:26:44.886 --rc geninfo_all_blocks=1 00:26:44.886 --rc geninfo_unexecuted_blocks=1 00:26:44.886 00:26:44.886 ' 00:26:44.886 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:44.886 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.886 --rc genhtml_branch_coverage=1 00:26:44.886 --rc genhtml_function_coverage=1 00:26:44.886 --rc genhtml_legend=1 00:26:44.886 --rc geninfo_all_blocks=1 00:26:44.886 --rc geninfo_unexecuted_blocks=1 00:26:44.886 00:26:44.886 ' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:44.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.887 
10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.887 10:37:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:46.791 10:37:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:46.791 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.792 10:37:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:46.792 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.792 10:37:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:46.792 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:46.792 Found net devices under 0000:09:00.0: cvl_0_0 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:46.792 Found net devices under 0000:09:00.1: cvl_0_1 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.792 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:47.051 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:47.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:26:47.052 00:26:47.052 --- 10.0.0.2 ping statistics --- 00:26:47.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.052 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:47.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:47.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:26:47.052 00:26:47.052 --- 10.0.0.1 ping statistics --- 00:26:47.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.052 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2629971 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2629971 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2629971 ']' 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.052 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.052 [2024-12-09 10:37:19.441621] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:26:47.052 [2024-12-09 10:37:19.441696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.310 [2024-12-09 10:37:19.513836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.310 [2024-12-09 10:37:19.570101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.310 [2024-12-09 10:37:19.570169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:47.310 [2024-12-09 10:37:19.570192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.310 [2024-12-09 10:37:19.570210] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.310 [2024-12-09 10:37:19.570225] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:47.310 [2024-12-09 10:37:19.570805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.310 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.310 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:47.310 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:47.311 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:47.311 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:47.311 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:47.311 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.311 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.311 [2024-12-09 10:37:19.708879] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:47.311 [2024-12-09 10:37:19.717055] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:47.311 null0 00:26:47.311 [2024-12-09 10:37:19.749008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2629990 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2629990 /tmp/host.sock 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2629990 ']' 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:47.569 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.569 10:37:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.569 [2024-12-09 10:37:19.815440] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:26:47.569 [2024-12-09 10:37:19.815522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2629990 ] 00:26:47.569 [2024-12-09 10:37:19.881240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:47.569 [2024-12-09 10:37:19.945814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.827 10:37:20 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.827 10:37:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:48.761 [2024-12-09 10:37:21.186030] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:48.761 [2024-12-09 10:37:21.186055] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:48.761 [2024-12-09 10:37:21.186077] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:49.019 [2024-12-09 10:37:21.313567] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:49.019 [2024-12-09 10:37:21.455664] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:49.019 [2024-12-09 10:37:21.456741] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1386650:1 started. 
00:26:49.019 [2024-12-09 10:37:21.458527] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:49.019 [2024-12-09 10:37:21.458587] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:49.019 [2024-12-09 10:37:21.458645] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:49.019 [2024-12-09 10:37:21.458671] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:49.019 [2024-12-09 10:37:21.458723] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:49.019 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.019 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.277 [2024-12-09 10:37:21.465247] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1386650 was disconnected and freed. delete nvme_qpair. 
00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:49.277 10:37:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:50.210 10:37:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:51.579 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:51.579 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:51.579 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:51.580 10:37:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:52.509 10:37:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.447 10:37:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.447 10:37:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.378 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.636 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.636 10:37:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:26:54.636 [2024-12-09 10:37:26.900002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:54.636 [2024-12-09 10:37:26.900063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.636 [2024-12-09 10:37:26.900082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.636 [2024-12-09 10:37:26.900098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.636 [2024-12-09 10:37:26.900110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.636 [2024-12-09 10:37:26.900146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.636 [2024-12-09 10:37:26.900162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.636 [2024-12-09 10:37:26.900175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.636 [2024-12-09 10:37:26.900188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.636 [2024-12-09 10:37:26.900201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.636 [2024-12-09 10:37:26.900214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.636 [2024-12-09 10:37:26.900237] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362e90 is same with the state(6) to be set 00:26:54.636 [2024-12-09 10:37:26.910022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362e90 (9): Bad file descriptor 00:26:54.636 [2024-12-09 10:37:26.920062] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.636 [2024-12-09 10:37:26.920084] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.636 [2024-12-09 10:37:26.920096] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.636 [2024-12-09 10:37:26.920105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.636 [2024-12-09 10:37:26.920161] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:55.568 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.568 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.569 [2024-12-09 10:37:27.937172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:55.569 [2024-12-09 10:37:27.937211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1362e90 with addr=10.0.0.2, port=4420 00:26:55.569 [2024-12-09 10:37:27.937228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362e90 is same with the state(6) to be set 00:26:55.569 [2024-12-09 10:37:27.937252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362e90 (9): Bad file descriptor 00:26:55.569 [2024-12-09 10:37:27.937623] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:26:55.569 [2024-12-09 10:37:27.937656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:55.569 [2024-12-09 10:37:27.937671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:55.569 [2024-12-09 10:37:27.937686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:55.569 [2024-12-09 10:37:27.937698] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:55.569 [2024-12-09 10:37:27.937708] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:55.569 [2024-12-09 10:37:27.937716] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:55.569 [2024-12-09 10:37:27.937728] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:55.569 [2024-12-09 10:37:27.937736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:55.569 10:37:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.501 [2024-12-09 10:37:28.940222] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:56.501 [2024-12-09 10:37:28.940275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:56.501 [2024-12-09 10:37:28.940303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:56.502 [2024-12-09 10:37:28.940317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:56.502 [2024-12-09 10:37:28.940331] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:56.502 [2024-12-09 10:37:28.940345] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:56.502 [2024-12-09 10:37:28.940356] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:56.502 [2024-12-09 10:37:28.940364] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:56.502 [2024-12-09 10:37:28.940410] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:56.502 [2024-12-09 10:37:28.940468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.502 [2024-12-09 10:37:28.940489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.502 [2024-12-09 10:37:28.940524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.502 [2024-12-09 10:37:28.940539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.502 [2024-12-09 10:37:28.940553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:56.502 [2024-12-09 10:37:28.940565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.502 [2024-12-09 10:37:28.940595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.502 [2024-12-09 10:37:28.940608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.502 [2024-12-09 10:37:28.940621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:56.502 [2024-12-09 10:37:28.940634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:56.502 [2024-12-09 10:37:28.940647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:26:56.502 [2024-12-09 10:37:28.940726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13525e0 (9): Bad file descriptor 00:26:56.502 [2024-12-09 10:37:28.941710] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:56.502 [2024-12-09 10:37:28.941733] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:56.759 10:37:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:56.759 10:37:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:57.698 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:58.031 10:37:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.630 [2024-12-09 10:37:30.992344] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:58.630 [2024-12-09 10:37:30.992381] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:58.630 [2024-12-09 10:37:30.992405] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:58.890 [2024-12-09 10:37:31.078673] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:58.890 [2024-12-09 10:37:31.174558] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:58.890 [2024-12-09 10:37:31.175386] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x138ff60:1 started. 
00:26:58.890 [2024-12-09 10:37:31.176684] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:58.890 [2024-12-09 10:37:31.176726] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:58.890 [2024-12-09 10:37:31.176757] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:58.890 [2024-12-09 10:37:31.176779] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:58.890 [2024-12-09 10:37:31.176793] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:58.890 10:37:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:58.890 [2024-12-09 10:37:31.181088] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x138ff60 was disconnected and freed. delete nvme_qpair. 
00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2629990 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2629990 ']' 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2629990 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.826 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629990 
00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629990' 00:27:00.084 killing process with pid 2629990 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2629990 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2629990 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:00.084 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:00.343 rmmod nvme_tcp 00:27:00.343 rmmod nvme_fabrics 00:27:00.343 rmmod nvme_keyring 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2629971 ']' 00:27:00.343 
10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2629971 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2629971 ']' 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2629971 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629971 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629971' 00:27:00.343 killing process with pid 2629971 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2629971 00:27:00.343 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2629971 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:00.602 10:37:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:00.602 10:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.506 10:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:02.506 00:27:02.506 real 0m18.151s 00:27:02.506 user 0m26.191s 00:27:02.506 sys 0m3.143s 00:27:02.506 10:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.506 10:37:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.506 ************************************ 00:27:02.506 END TEST nvmf_discovery_remove_ifc 00:27:02.506 ************************************ 00:27:02.764 10:37:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:02.764 10:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:02.764 10:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.764 10:37:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.764 ************************************ 
00:27:02.764 START TEST nvmf_identify_kernel_target 00:27:02.764 ************************************ 00:27:02.764 10:37:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:02.764 * Looking for test storage... 00:27:02.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.764 10:37:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:02.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.764 --rc genhtml_branch_coverage=1 00:27:02.764 --rc genhtml_function_coverage=1 00:27:02.764 --rc genhtml_legend=1 00:27:02.764 --rc geninfo_all_blocks=1 00:27:02.764 --rc geninfo_unexecuted_blocks=1 00:27:02.764 00:27:02.764 ' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:02.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.764 --rc genhtml_branch_coverage=1 00:27:02.764 --rc genhtml_function_coverage=1 00:27:02.764 --rc genhtml_legend=1 00:27:02.764 --rc geninfo_all_blocks=1 00:27:02.764 --rc geninfo_unexecuted_blocks=1 00:27:02.764 00:27:02.764 ' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:02.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.764 --rc genhtml_branch_coverage=1 00:27:02.764 --rc genhtml_function_coverage=1 00:27:02.764 --rc genhtml_legend=1 00:27:02.764 --rc geninfo_all_blocks=1 00:27:02.764 --rc geninfo_unexecuted_blocks=1 00:27:02.764 00:27:02.764 ' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:02.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.764 --rc genhtml_branch_coverage=1 00:27:02.764 --rc genhtml_function_coverage=1 00:27:02.764 --rc genhtml_legend=1 00:27:02.764 --rc geninfo_all_blocks=1 
00:27:02.764 --rc geninfo_unexecuted_blocks=1 00:27:02.764 00:27:02.764 ' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.764 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:02.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:02.765 10:37:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:05.296 10:37:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:05.296 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:05.297 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:05.297 10:37:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:27:05.297 Found 0000:09:00.1 (0x8086 - 0x159b)
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:27:05.297 Found net devices under 0000:09:00.0: cvl_0_0
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:27:05.297 Found net devices under 0000:09:00.1: cvl_0_1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:27:05.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:27:05.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms
00:27:05.297
00:27:05.297 --- 10.0.0.2 ping statistics ---
00:27:05.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:05.297 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:27:05.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:27:05.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms
00:27:05.297
00:27:05.297 --- 10.0.0.1 ping statistics ---
00:27:05.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:27:05.297 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:05.297 10:37:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:06.234 Waiting for block devices as requested
00:27:06.234 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma
00:27:06.494 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma
00:27:06.494 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma
00:27:06.494 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma
00:27:06.753 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma
00:27:06.753 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma
00:27:06.753 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma
00:27:07.012 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma
00:27:07.012 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme
00:27:07.012 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma
00:27:07.272 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma
00:27:07.272 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma
00:27:07.272 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma
00:27:07.272 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma
00:27:07.532 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma
00:27:07.532 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma
00:27:07.532 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:07.792 No valid GPT data, bailing
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt=
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420
00:27:07.792
00:27:07.792 Discovery Log Number of Records 2, Generation counter 2
00:27:07.792 =====Discovery Log Entry 0======
00:27:07.792 trtype: tcp
00:27:07.792 adrfam: ipv4
00:27:07.792 subtype: current discovery subsystem
00:27:07.792 treq: not specified, sq flow control disable supported
00:27:07.792 portid: 1
00:27:07.792 trsvcid: 4420
00:27:07.792 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:07.792 traddr: 10.0.0.1
00:27:07.792 eflags: none
00:27:07.792 sectype: none
00:27:07.792 =====Discovery Log Entry 1======
00:27:07.792 trtype: tcp
00:27:07.792 adrfam: ipv4
00:27:07.792 subtype: nvme subsystem
00:27:07.792 treq: not specified, sq flow control disable supported
00:27:07.792 portid: 1
00:27:07.792 trsvcid: 4420
00:27:07.792 subnqn: nqn.2016-06.io.spdk:testnqn
00:27:07.792 traddr: 10.0.0.1
00:27:07.792 eflags: none
00:27:07.792 sectype: none
00:27:07.792 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1
00:27:07.792 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery'
00:27:08.051 =====================================================
00:27:08.051 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery
00:27:08.051 =====================================================
00:27:08.051 Controller Capabilities/Features
00:27:08.051 ================================
00:27:08.051 Vendor ID: 0000
00:27:08.051 Subsystem Vendor ID: 0000
00:27:08.051 Serial Number: 6f8d9b48b1db26791adf
00:27:08.051 Model Number: Linux
00:27:08.051 Firmware Version: 6.8.9-20
00:27:08.051 Recommended Arb Burst: 0
00:27:08.051 IEEE OUI Identifier: 00 00 00
00:27:08.051 Multi-path I/O
00:27:08.051 May have multiple subsystem ports: No
00:27:08.051 May have multiple controllers: No
00:27:08.051 Associated with SR-IOV VF: No
00:27:08.051 Max Data Transfer Size: Unlimited
00:27:08.051 Max Number of Namespaces: 0
00:27:08.051 Max Number of I/O Queues: 1024
00:27:08.051 NVMe Specification Version (VS): 1.3
00:27:08.051 NVMe Specification Version (Identify): 1.3
00:27:08.051 Maximum Queue Entries: 1024
00:27:08.051 Contiguous Queues Required: No
00:27:08.051 Arbitration Mechanisms Supported
00:27:08.051 Weighted Round Robin: Not Supported
00:27:08.051 Vendor Specific: Not Supported
00:27:08.051 Reset Timeout: 7500 ms
00:27:08.051 Doorbell Stride: 4 bytes
00:27:08.051 NVM Subsystem Reset: Not Supported
00:27:08.051 Command Sets Supported
00:27:08.051 NVM Command Set: Supported
00:27:08.051 Boot Partition: Not Supported
00:27:08.051 Memory Page Size Minimum: 4096 bytes
00:27:08.051 Memory Page Size Maximum: 4096 bytes
00:27:08.051 Persistent Memory Region: Not Supported
00:27:08.051 Optional Asynchronous Events Supported
00:27:08.051 Namespace Attribute Notices: Not Supported
00:27:08.051 Firmware Activation Notices: Not Supported
00:27:08.051 ANA Change Notices: Not Supported
00:27:08.051 PLE Aggregate Log Change Notices: Not Supported
00:27:08.051 LBA Status Info Alert Notices: Not Supported
00:27:08.051 EGE Aggregate Log Change Notices: Not Supported
00:27:08.051 Normal NVM Subsystem Shutdown event: Not Supported
00:27:08.051 Zone Descriptor Change Notices: Not Supported
00:27:08.051 Discovery Log Change Notices: Supported
00:27:08.051 Controller Attributes
00:27:08.051 128-bit Host Identifier: Not Supported
00:27:08.051 Non-Operational Permissive Mode: Not Supported
00:27:08.051 NVM Sets: Not Supported
00:27:08.051 Read Recovery Levels: Not Supported
00:27:08.051 Endurance Groups: Not Supported
00:27:08.051 Predictable Latency Mode: Not Supported
00:27:08.051 Traffic Based Keep ALive: Not Supported
00:27:08.051 Namespace Granularity: Not Supported
00:27:08.051 SQ Associations: Not Supported
00:27:08.051 UUID List: Not Supported
00:27:08.051 Multi-Domain Subsystem: Not Supported
00:27:08.051 Fixed Capacity Management: Not Supported
00:27:08.051 Variable Capacity Management: Not Supported
00:27:08.051 Delete Endurance Group: Not Supported
00:27:08.051 Delete NVM Set: Not Supported
00:27:08.051 Extended LBA Formats Supported: Not Supported
00:27:08.051 Flexible Data Placement Supported: Not Supported
00:27:08.051
00:27:08.051 Controller Memory Buffer Support
00:27:08.051 ================================
00:27:08.051 Supported: No
00:27:08.051
00:27:08.051 Persistent Memory Region Support
00:27:08.051 ================================
00:27:08.051 Supported: No
00:27:08.051
00:27:08.051 Admin Command Set Attributes
00:27:08.051 ============================
00:27:08.051 Security Send/Receive: Not Supported
00:27:08.051 Format NVM: Not Supported
00:27:08.051 Firmware Activate/Download: Not Supported
00:27:08.051 Namespace Management: Not Supported
00:27:08.051 Device Self-Test: Not Supported
00:27:08.051 Directives: Not Supported
00:27:08.051 NVMe-MI: Not Supported
00:27:08.051 Virtualization Management: Not Supported
00:27:08.051 Doorbell Buffer Config: Not Supported
00:27:08.051 Get LBA Status Capability: Not Supported
00:27:08.051 Command & Feature Lockdown Capability: Not Supported
00:27:08.051 Abort Command Limit: 1
00:27:08.051 Async Event Request Limit: 1
00:27:08.051 Number of Firmware Slots: N/A
00:27:08.051 Firmware Slot 1 Read-Only: N/A
00:27:08.051 Firmware Activation Without Reset: N/A
00:27:08.051 Multiple Update Detection Support: N/A
00:27:08.051 Firmware Update Granularity: No Information Provided
00:27:08.051 Per-Namespace SMART Log: No
00:27:08.051 Asymmetric Namespace Access Log Page: Not Supported
00:27:08.051 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:27:08.051 Command Effects Log Page: Not Supported
00:27:08.051 Get Log Page Extended Data: Supported
00:27:08.051 Telemetry Log Pages: Not Supported
00:27:08.051 Persistent Event Log Pages: Not Supported
00:27:08.051 Supported Log Pages Log Page: May Support
00:27:08.051 Commands Supported & Effects Log Page: Not Supported
00:27:08.051 Feature Identifiers & Effects Log Page:May Support
00:27:08.051 NVMe-MI Commands & Effects Log Page: May Support
00:27:08.051 Data Area 4 for Telemetry Log: Not Supported
00:27:08.051 Error Log Page Entries Supported: 1
00:27:08.052 Keep Alive: Not Supported
00:27:08.052
00:27:08.052 NVM Command Set Attributes
00:27:08.052 ==========================
00:27:08.052 Submission Queue Entry Size
00:27:08.052 Max: 1
00:27:08.052 Min: 1
00:27:08.052 Completion Queue Entry Size
00:27:08.052 Max: 1
00:27:08.052 Min: 1
00:27:08.052 Number of Namespaces: 0
00:27:08.052 Compare Command: Not Supported
00:27:08.052 Write Uncorrectable Command: Not Supported
00:27:08.052 Dataset Management Command: Not Supported
00:27:08.052 Write Zeroes Command: Not Supported
00:27:08.052 Set Features Save Field: Not Supported
00:27:08.052 Reservations: Not Supported
00:27:08.052 Timestamp: Not Supported
00:27:08.052 Copy: Not Supported
00:27:08.052 Volatile Write Cache: Not Present
00:27:08.052 Atomic Write Unit (Normal): 1
00:27:08.052 Atomic Write Unit (PFail): 1
00:27:08.052 Atomic Compare & Write Unit: 1
00:27:08.052 Fused Compare & Write: Not Supported
00:27:08.052 Scatter-Gather List
00:27:08.052 SGL Command Set: Supported
00:27:08.052 SGL Keyed: Not Supported
00:27:08.052 SGL Bit Bucket Descriptor: Not Supported
00:27:08.052 SGL Metadata Pointer: Not Supported
00:27:08.052 Oversized SGL: Not Supported
00:27:08.052 SGL Metadata Address: Not Supported
00:27:08.052 SGL Offset: Supported
00:27:08.052 Transport SGL Data Block: Not Supported
00:27:08.052 Replay Protected Memory Block: Not Supported
00:27:08.052
00:27:08.052 Firmware Slot Information
00:27:08.052 =========================
00:27:08.052 Active slot: 0
00:27:08.052
00:27:08.052
00:27:08.052 Error Log
00:27:08.052 =========
00:27:08.052
00:27:08.052 Active Namespaces
00:27:08.052 =================
00:27:08.052 Discovery Log Page
00:27:08.052 ==================
00:27:08.052 Generation Counter: 2
00:27:08.052 Number of Records: 2
00:27:08.052 Record Format: 0
00:27:08.052
00:27:08.052 Discovery Log Entry 0
00:27:08.052 ----------------------
00:27:08.052 Transport Type: 3 (TCP)
00:27:08.052 Address Family: 1 (IPv4)
00:27:08.052 Subsystem Type: 3 (Current Discovery Subsystem)
00:27:08.052 Entry Flags:
00:27:08.052 Duplicate Returned Information: 0
00:27:08.052 Explicit Persistent Connection Support for Discovery: 0
00:27:08.052 Transport Requirements:
00:27:08.052 Secure Channel: Not Specified
00:27:08.052 Port ID: 1 (0x0001)
00:27:08.052 Controller ID: 65535 (0xffff)
00:27:08.052 Admin Max SQ Size: 32
00:27:08.052 Transport Service Identifier: 4420
00:27:08.052 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:27:08.052 Transport Address: 10.0.0.1
00:27:08.052 Discovery Log Entry 1
00:27:08.052 ----------------------
00:27:08.052 Transport Type: 3 (TCP)
00:27:08.052 Address Family: 1 (IPv4)
00:27:08.052 Subsystem Type: 2 (NVM Subsystem)
00:27:08.052 Entry Flags:
00:27:08.052 Duplicate Returned Information: 0
00:27:08.052 Explicit Persistent Connection Support for Discovery: 0
00:27:08.052 Transport Requirements:
00:27:08.052 Secure Channel: Not Specified
00:27:08.052 Port ID: 1 (0x0001)
00:27:08.052 Controller ID: 65535 (0xffff)
00:27:08.052 Admin Max SQ Size: 32
00:27:08.052 Transport Service Identifier: 4420
00:27:08.052 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn
00:27:08.052 Transport Address: 10.0.0.1
00:27:08.052 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:27:08.052 get_feature(0x01) failed
00:27:08.052 get_feature(0x02) failed
00:27:08.052 get_feature(0x04) failed
00:27:08.052 =====================================================
00:27:08.052 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn
00:27:08.052 =====================================================
00:27:08.052 Controller Capabilities/Features
00:27:08.052 ================================
00:27:08.052 Vendor ID: 0000
00:27:08.052 Subsystem Vendor ID: 0000
00:27:08.052 Serial Number: 80ca65895638b9169ad8
00:27:08.052 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn
00:27:08.052 Firmware Version: 6.8.9-20
00:27:08.052 Recommended Arb Burst: 6
00:27:08.052 IEEE OUI Identifier: 00 00 00
00:27:08.052 Multi-path I/O
00:27:08.052 May have multiple subsystem ports: Yes
00:27:08.052 May have multiple controllers: Yes
00:27:08.052 Associated with SR-IOV VF: No
00:27:08.052 Max Data Transfer Size: Unlimited
00:27:08.052 Max Number of Namespaces: 1024
00:27:08.052 Max Number of I/O Queues: 128
00:27:08.052 NVMe Specification Version (VS): 1.3
00:27:08.052 NVMe Specification Version (Identify): 1.3
00:27:08.052 Maximum Queue Entries: 1024
00:27:08.052 Contiguous Queues Required: No
00:27:08.052 Arbitration Mechanisms Supported
00:27:08.052 Weighted Round Robin: Not Supported
00:27:08.052 Vendor Specific: Not Supported
00:27:08.052 Reset Timeout: 7500 ms
00:27:08.052 Doorbell Stride: 4 bytes
00:27:08.052 NVM Subsystem Reset: Not Supported
00:27:08.052 Command Sets Supported
00:27:08.052 NVM Command Set: Supported
00:27:08.052 Boot Partition: Not Supported
00:27:08.052 Memory Page Size Minimum: 4096 bytes
00:27:08.052 Memory Page Size Maximum: 4096 bytes
00:27:08.052 Persistent Memory Region: Not Supported
00:27:08.052 Optional Asynchronous Events Supported
00:27:08.052 Namespace Attribute Notices: Supported
00:27:08.052 Firmware Activation Notices: Not Supported
00:27:08.052 ANA Change Notices: Supported
00:27:08.052 PLE Aggregate Log Change Notices: Not Supported
00:27:08.052 LBA Status Info Alert Notices: Not Supported
00:27:08.052 EGE Aggregate Log Change Notices: Not Supported
00:27:08.052 Normal NVM Subsystem Shutdown event: Not Supported
00:27:08.052 Zone Descriptor Change Notices: Not Supported
00:27:08.052 Discovery Log Change Notices: Not Supported
00:27:08.052 Controller Attributes
00:27:08.052 128-bit Host Identifier: Supported
00:27:08.052 Non-Operational Permissive Mode: Not Supported
00:27:08.052 NVM Sets: Not Supported
00:27:08.052 Read Recovery Levels: Not Supported
00:27:08.052 Endurance Groups: Not Supported
00:27:08.052 Predictable Latency Mode: Not Supported
00:27:08.052 Traffic Based Keep ALive: Supported
00:27:08.052 Namespace Granularity: Not Supported
00:27:08.052 SQ Associations: Not Supported
00:27:08.052 UUID List: Not Supported
00:27:08.052 Multi-Domain Subsystem: Not Supported
00:27:08.052 Fixed Capacity Management: Not Supported
00:27:08.052 Variable Capacity Management: Not Supported
00:27:08.052 Delete Endurance Group: Not Supported
00:27:08.052 Delete NVM Set: Not Supported
00:27:08.052 Extended LBA Formats Supported: Not Supported
00:27:08.052 Flexible Data Placement Supported: Not Supported
00:27:08.052
00:27:08.052 Controller Memory Buffer Support
00:27:08.052 ================================
00:27:08.052 Supported: No
00:27:08.052
00:27:08.052 Persistent Memory Region Support
00:27:08.052 ================================
00:27:08.052 Supported: No
00:27:08.052
00:27:08.052 Admin Command Set Attributes
00:27:08.052 ============================
00:27:08.052 Security Send/Receive: Not Supported
00:27:08.052 Format NVM: Not Supported
00:27:08.052 Firmware Activate/Download: Not Supported
00:27:08.052 Namespace Management: Not Supported
00:27:08.052 Device Self-Test: Not Supported
00:27:08.052 Directives: Not Supported
00:27:08.052 NVMe-MI: Not Supported
00:27:08.052 Virtualization Management: Not Supported
00:27:08.052 Doorbell Buffer Config: Not Supported
00:27:08.052 Get LBA Status Capability: Not Supported
00:27:08.052 Command & Feature Lockdown Capability: Not Supported
00:27:08.052 Abort Command Limit: 4
00:27:08.052 Async Event Request Limit: 4
00:27:08.052 Number of Firmware Slots: N/A
00:27:08.052 Firmware Slot 1 Read-Only: N/A
00:27:08.052 Firmware Activation Without Reset: N/A
00:27:08.052 Multiple Update Detection Support: N/A
00:27:08.052 Firmware Update Granularity: No Information Provided
00:27:08.052 Per-Namespace SMART Log: Yes
00:27:08.052 Asymmetric Namespace Access Log Page: Supported
00:27:08.052 ANA Transition Time : 10 sec
00:27:08.052
00:27:08.052 Asymmetric Namespace Access Capabilities
00:27:08.052 ANA Optimized State : Supported
00:27:08.052 ANA Non-Optimized State : Supported
00:27:08.052 ANA Inaccessible State : Supported
00:27:08.052 ANA Persistent Loss State : Supported
00:27:08.052 ANA Change State : Supported
00:27:08.052 ANAGRPID is not changed : No
00:27:08.052 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported
00:27:08.052
00:27:08.052 ANA Group Identifier Maximum : 128
00:27:08.052 Number of ANA Group Identifiers : 128
00:27:08.052 Max Number of Allowed Namespaces : 1024
00:27:08.052 Subsystem NQN: nqn.2016-06.io.spdk:testnqn
00:27:08.052 Command Effects Log Page: Supported
00:27:08.052 Get Log Page Extended Data: Supported
00:27:08.052 Telemetry Log Pages: Not Supported
00:27:08.053 Persistent Event Log Pages: Not Supported
00:27:08.053 Supported Log Pages Log Page: May Support
00:27:08.053 Commands Supported & Effects Log Page: Not Supported
00:27:08.053 Feature Identifiers & Effects Log Page:May Support
00:27:08.053 NVMe-MI Commands & Effects Log Page: May Support
00:27:08.053 Data Area 4 for Telemetry Log: Not Supported
00:27:08.053 Error Log Page Entries Supported: 128
00:27:08.053 Keep Alive: Supported
00:27:08.053 Keep Alive Granularity: 1000 ms
00:27:08.053
00:27:08.053 NVM Command Set Attributes
00:27:08.053 ==========================
00:27:08.053 Submission Queue Entry Size
00:27:08.053 Max: 64
00:27:08.053 Min: 64
00:27:08.053 Completion Queue Entry Size
00:27:08.053 Max: 16
00:27:08.053 Min: 16
00:27:08.053 Number of Namespaces: 1024
00:27:08.053 Compare Command: Not Supported
00:27:08.053 Write Uncorrectable Command: Not Supported
00:27:08.053 Dataset Management Command: Supported
00:27:08.053 Write Zeroes Command: Supported
00:27:08.053 Set Features Save Field: Not Supported
00:27:08.053 Reservations: Not Supported
00:27:08.053 Timestamp: Not Supported
00:27:08.053 Copy: Not Supported
00:27:08.053 Volatile Write Cache: Present
00:27:08.053 Atomic Write Unit (Normal): 1
00:27:08.053 Atomic Write Unit (PFail): 1
00:27:08.053 Atomic Compare & Write Unit: 1
00:27:08.053 Fused Compare & Write: Not Supported
00:27:08.053 Scatter-Gather List
00:27:08.053 SGL Command Set: Supported
00:27:08.053 SGL Keyed: Not Supported
00:27:08.053 SGL Bit Bucket Descriptor: Not Supported
00:27:08.053 SGL Metadata Pointer: Not Supported
00:27:08.053 Oversized SGL: Not Supported
00:27:08.053 SGL Metadata Address: Not Supported
00:27:08.053 SGL Offset: Supported
00:27:08.053 Transport SGL Data Block: Not Supported
00:27:08.053 Replay Protected Memory Block: Not Supported
00:27:08.053
00:27:08.053 Firmware Slot Information
00:27:08.053 =========================
00:27:08.053 Active slot: 0
00:27:08.053
00:27:08.053 Asymmetric Namespace Access
00:27:08.053 ===========================
00:27:08.053 Change Count : 0
00:27:08.053 Number of ANA Group Descriptors : 1
00:27:08.053 ANA Group Descriptor : 0
00:27:08.053 ANA Group ID : 1
00:27:08.053 Number of NSID Values : 1
00:27:08.053 Change Count : 0
00:27:08.053 ANA State : 1
00:27:08.053 Namespace Identifier : 1
00:27:08.053
00:27:08.053 Commands Supported and Effects
00:27:08.053 ==============================
00:27:08.053 Admin Commands
00:27:08.053 --------------
00:27:08.053 Get Log Page (02h): Supported
00:27:08.053 Identify (06h): Supported
00:27:08.053 Abort (08h): Supported
00:27:08.053 Set Features (09h): Supported
00:27:08.053 Get Features (0Ah): Supported
00:27:08.053 Asynchronous Event Request (0Ch): Supported
00:27:08.053 Keep Alive (18h): Supported
00:27:08.053 I/O Commands
00:27:08.053 ------------
00:27:08.053 Flush (00h): Supported
00:27:08.053 Write (01h): Supported LBA-Change
00:27:08.053 Read (02h): Supported
00:27:08.053 Write Zeroes (08h): Supported LBA-Change
00:27:08.053 Dataset Management (09h): Supported
00:27:08.053
00:27:08.053 Error Log
00:27:08.053 =========
00:27:08.053 Entry: 0
00:27:08.053 Error Count: 0x3
00:27:08.053 Submission Queue Id: 0x0
00:27:08.053 Command Id: 0x5
00:27:08.053 Phase Bit: 0
00:27:08.053 Status Code: 0x2
00:27:08.053 Status Code Type: 0x0
00:27:08.053 Do Not Retry: 1
00:27:08.312 Error Location: 0x28
00:27:08.312 LBA: 0x0
00:27:08.312 Namespace: 0x0
00:27:08.312 Vendor Log Page: 0x0
00:27:08.312 -----------
00:27:08.312 Entry: 1
00:27:08.312 Error Count: 0x2
00:27:08.312 Submission Queue Id: 0x0
00:27:08.312 Command Id: 0x5
00:27:08.312 Phase Bit: 0
00:27:08.312 Status Code: 0x2
00:27:08.312 Status Code Type: 0x0
00:27:08.312 Do Not Retry: 1
00:27:08.312 Error Location: 0x28
00:27:08.312 LBA: 0x0
00:27:08.312 Namespace: 0x0
00:27:08.312 Vendor Log Page: 0x0
00:27:08.312 -----------
00:27:08.312 Entry: 2
00:27:08.312 Error Count: 0x1
00:27:08.312 Submission Queue Id: 0x0
00:27:08.312 Command Id: 0x4
00:27:08.312 Phase Bit: 0
00:27:08.312 Status Code: 0x2
00:27:08.312 Status Code Type: 0x0
00:27:08.312 Do Not Retry: 1
00:27:08.312 Error Location: 0x28
00:27:08.312 LBA: 0x0
00:27:08.312 Namespace: 0x0
00:27:08.312 Vendor Log Page: 0x0
00:27:08.312
00:27:08.312 Number of Queues
00:27:08.312 ================
00:27:08.312 Number of I/O Submission Queues: 128
00:27:08.312 Number of I/O Completion Queues: 128
00:27:08.312
00:27:08.312 ZNS Specific Controller Data
00:27:08.312 ============================
00:27:08.312 Zone Append Size Limit: 0
00:27:08.312
00:27:08.312
00:27:08.312 Active Namespaces
00:27:08.312 =================
00:27:08.312 get_feature(0x05) failed
00:27:08.312 Namespace ID:1
00:27:08.312 Command Set Identifier: NVM (00h)
00:27:08.312 Deallocate: Supported
00:27:08.312 Deallocated/Unwritten Error: Not Supported
00:27:08.312 Deallocated Read Value: Unknown
00:27:08.312 Deallocate in Write Zeroes: Not Supported
00:27:08.312 Deallocated Guard Field: 0xFFFF
00:27:08.312 Flush: Supported
00:27:08.312 Reservation: Not Supported
00:27:08.312 Namespace Sharing Capabilities: Multiple Controllers
00:27:08.312 Size (in LBAs): 1953525168 (931GiB)
00:27:08.312 Capacity (in LBAs): 1953525168 (931GiB)
00:27:08.312 Utilization (in LBAs): 1953525168 (931GiB)
00:27:08.312 UUID: f042bb3c-313d-442d-9d5e-90e221c02ac1
00:27:08.312 Thin Provisioning: Not Supported
00:27:08.312 Per-NS Atomic Units: Yes
00:27:08.312 Atomic Boundary Size (Normal): 0
00:27:08.312 Atomic Boundary Size (PFail): 0
00:27:08.312 Atomic Boundary Offset: 0
00:27:08.312 NGUID/EUI64 Never Reused: No
00:27:08.312 ANA group ID: 1
00:27:08.312 Namespace Write Protected: No
00:27:08.312 Number of LBA Formats: 1
00:27:08.312 Current LBA Format: LBA Format #00
00:27:08.312 LBA Format #00: Data Size: 512 Metadata Size: 0
00:27:08.312
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:08.312 rmmod nvme_tcp
00:27:08.312 rmmod nvme_fabrics
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0
00:27:08.312 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:08.313 10:37:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:10.221 10:37:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:10.221 10:37:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:11.597 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:11.597 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:11.597 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:27:12.533 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:12.796 00:27:12.796 real 0m10.134s 00:27:12.796 user 0m2.239s 00:27:12.796 sys 0m3.816s 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:12.796 ************************************ 00:27:12.796 END TEST nvmf_identify_kernel_target 00:27:12.796 ************************************ 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.796 ************************************ 00:27:12.796 START TEST nvmf_auth_host 00:27:12.796 ************************************ 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:12.796 * Looking for test storage... 
00:27:12.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:27:12.796 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:13.062 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:13.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.063 --rc genhtml_branch_coverage=1 00:27:13.063 --rc genhtml_function_coverage=1 00:27:13.063 --rc genhtml_legend=1 00:27:13.063 --rc geninfo_all_blocks=1 00:27:13.063 --rc geninfo_unexecuted_blocks=1 00:27:13.063 00:27:13.063 ' 00:27:13.063 10:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:13.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.063 --rc genhtml_branch_coverage=1 00:27:13.063 --rc genhtml_function_coverage=1 00:27:13.063 --rc genhtml_legend=1 00:27:13.063 --rc geninfo_all_blocks=1 00:27:13.063 --rc geninfo_unexecuted_blocks=1 00:27:13.063 00:27:13.063 ' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:13.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.063 --rc genhtml_branch_coverage=1 00:27:13.063 --rc genhtml_function_coverage=1 00:27:13.063 --rc genhtml_legend=1 00:27:13.063 --rc geninfo_all_blocks=1 00:27:13.063 --rc geninfo_unexecuted_blocks=1 00:27:13.063 00:27:13.063 ' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:13.063 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.063 --rc genhtml_branch_coverage=1 00:27:13.063 --rc genhtml_function_coverage=1 00:27:13.063 --rc genhtml_legend=1 00:27:13.063 --rc geninfo_all_blocks=1 00:27:13.063 --rc geninfo_unexecuted_blocks=1 00:27:13.063 00:27:13.063 ' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.063 10:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:13.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:13.063 10:37:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:13.063 10:37:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:15.596 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:15.597 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:15.597 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:15.597 Found net devices under 0000:09:00.0: cvl_0_0 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:15.597 Found net devices under 0000:09:00.1: cvl_0_1 00:27:15.597 10:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:15.597 10:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:15.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:15.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:27:15.597 00:27:15.597 --- 10.0.0.2 ping statistics --- 00:27:15.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.597 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:15.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:15.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:27:15.597 00:27:15.597 --- 10.0.0.1 ping statistics --- 00:27:15.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:15.597 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2637207 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:15.597 10:37:47 
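The `ipts` call at @287 above expands (at @790) into a plain `iptables` invocation that appends a comment match. That wrapper exists so teardown can later delete exactly the rules this test inserted by grepping for the tag. A sketch with `iptables` shadowed by `echo`, so the expansion is visible without root:

```shell
# Sketch of the ipts wrapper from nvmf/common.sh: tag every rule with an
# "SPDK_NVMF:<original args>" comment so cleanup can find it later.
iptables() { echo iptables "$@"; }   # stand-in for the real binary, for illustration only

ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The echoed command matches the @790 trace line, modulo the quoting that the log's `set -x` output adds around the comment string.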
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2637207 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2637207 ']' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.597 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8bc74dc966bfe9c3820e8a2aacff7146 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.w2e 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8bc74dc966bfe9c3820e8a2aacff7146 0 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8bc74dc966bfe9c3820e8a2aacff7146 0 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8bc74dc966bfe9c3820e8a2aacff7146 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.w2e 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.w2e 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.w2e 00:27:15.598 10:37:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fda571e6640eb622184f8828641948b35639e7ce79c02c08eac748419f74a6c8 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ikj 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fda571e6640eb622184f8828641948b35639e7ce79c02c08eac748419f74a6c8 3 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fda571e6640eb622184f8828641948b35639e7ce79c02c08eac748419f74a6c8 3 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fda571e6640eb622184f8828641948b35639e7ce79c02c08eac748419f74a6c8 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:15.598 10:37:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ikj 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ikj 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ikj 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2d3cc4990e54de3b62b04330a4551e2a0fadcd994391c7e0 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PBZ 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2d3cc4990e54de3b62b04330a4551e2a0fadcd994391c7e0 0 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2d3cc4990e54de3b62b04330a4551e2a0fadcd994391c7e0 0 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.598 10:37:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2d3cc4990e54de3b62b04330a4551e2a0fadcd994391c7e0 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.598 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PBZ 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PBZ 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.PBZ 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c3062a2b158a214f9da9a5dee239d3be1fcb8bce87209b44 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.iQb 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c3062a2b158a214f9da9a5dee239d3be1fcb8bce87209b44 2 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 c3062a2b158a214f9da9a5dee239d3be1fcb8bce87209b44 2 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c3062a2b158a214f9da9a5dee239d3be1fcb8bce87209b44 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.iQb 00:27:15.856 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.iQb 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.iQb 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0587261f3fee587777ae2c12b9613fcb 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.GZ6 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0587261f3fee587777ae2c12b9613fcb 1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0587261f3fee587777ae2c12b9613fcb 1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0587261f3fee587777ae2c12b9613fcb 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.GZ6 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.GZ6 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.GZ6 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=aebe61d9fa1ca648bff94f8a36127278 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.3Eq 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key aebe61d9fa1ca648bff94f8a36127278 1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 aebe61d9fa1ca648bff94f8a36127278 1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=aebe61d9fa1ca648bff94f8a36127278 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.3Eq 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.3Eq 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3Eq 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:15.857 10:37:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2e604930ec34a4576b10b72f3f5bfe9ab4bd7e7715f6251b 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.NBj 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2e604930ec34a4576b10b72f3f5bfe9ab4bd7e7715f6251b 2 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2e604930ec34a4576b10b72f3f5bfe9ab4bd7e7715f6251b 2 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2e604930ec34a4576b10b72f3f5bfe9ab4bd7e7715f6251b 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.NBj 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.NBj 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.NBj 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c304d096c4b0d341cb926c6bc1dc208e 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kjo 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c304d096c4b0d341cb926c6bc1dc208e 0 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c304d096c4b0d341cb926c6bc1dc208e 0 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c304d096c4b0d341cb926c6bc1dc208e 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:15.857 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kjo 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kjo 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kjo 00:27:16.115 10:37:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=58f98653733899d57e0bbdd77abf6f8895b4b01039d29a81df760d8ac69f595d 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.BGw 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 58f98653733899d57e0bbdd77abf6f8895b4b01039d29a81df760d8ac69f595d 3 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 58f98653733899d57e0bbdd77abf6f8895b4b01039d29a81df760d8ac69f595d 3 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=58f98653733899d57e0bbdd77abf6f8895b4b01039d29a81df760d8ac69f595d 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.BGw 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.BGw 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.BGw 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2637207 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2637207 ']' 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:16.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
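Each `gen_dhchap_key` run above draws random hex from `xxd -p /dev/urandom`, then pipes it through `format_dhchap_key` / `format_key` and an inline `python -` step (@757/@747/@733) before writing the result to a `spdk.key-*` temp file. The trace elides the Python body; the following is a hedged reconstruction of what that step computes, per the NVMe-oF DH-HMAC-CHAP secret representation: treat the hex string as an ASCII secret, append its CRC32 little-endian, base64 the result, and wrap it as `DHHC-1:<digest>:<base64>:`. Byte handling here is inferred, not copied from SPDK:

```shell
# Hedged sketch of format_key: prefix, ASCII secret, digest index
# (0=null, 1=sha256, 2=sha384, 3=sha512 per the digests map in the trace).
format_key() {
    prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib

prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
# CRC32 of the ASCII secret, appended little-endian, then base64-encoded
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

format_key DHHC-1 8bc74dc966bfe9c3820e8a2aacff7146 0
```

For the 32-character key above this yields a `DHHC-1:00:...:` string whose base64 payload encodes 36 bytes (32 secret + 4 CRC), which is the form `nvmf_tgt` later accepts via the keyring.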
00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:16.115 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.w2e 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ikj ]] 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ikj 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.PBZ 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.iQb ]] 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.iQb 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.374 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.GZ6 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3Eq ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Eq 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.NBj 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kjo ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kjo 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.BGw 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:16.375 10:37:48 
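The @80-@82 loop above registers every generated secret file with the running `nvmf_tgt` through the `keyring_file_add_key` RPC, adding a matching `ckeyN` entry whenever a controller (bidirectional) secret was generated for that slot. A sketch of that pattern, with `rpc_cmd` shadowed by `echo` so the generated calls can be inspected without a live target; the file names are the ones from this run, and the `scripts/rpc.py` path is an assumption:

```shell
# Stand-in for rpc_cmd: print the RPC invocation instead of issuing it.
rpc_cmd() { echo scripts/rpc.py "$@"; }

key0=/tmp/spdk.key-null.w2e        # keys[0] from the trace
ckey0=/tmp/spdk.key-sha512.Ikj     # ckeys[0] from the trace

rpc_cmd keyring_file_add_key key0 "$key0"
# only register the controller key if one was generated for this slot
[ -n "$ckey0" ] && rpc_cmd keyring_file_add_key ckey0 "$ckey0"
```

Slot 4 in the trace illustrates the conditional branch: `ckeys[4]` is empty, so the `[[ -n '' ]]` test fails and no `ckey4` is added.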
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:16.375 10:37:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:17.309 Waiting for block devices as requested 00:27:17.567 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:17.567 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:17.567 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:17.826 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:17.826 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:17.826 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:17.826 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:18.083 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:18.083 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:18.340 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:18.340 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:18.340 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:18.340 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:18.597 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:18.597 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:18.598 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:18.598 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:19.161 No valid GPT data, bailing 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:19.161 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:19.418 00:27:19.418 Discovery Log Number of Records 2, Generation counter 2 00:27:19.419 =====Discovery Log Entry 0====== 00:27:19.419 trtype: tcp 00:27:19.419 adrfam: ipv4 00:27:19.419 subtype: current discovery subsystem 00:27:19.419 treq: not specified, sq flow control disable supported 00:27:19.419 portid: 1 00:27:19.419 trsvcid: 4420 00:27:19.419 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:19.419 traddr: 10.0.0.1 00:27:19.419 eflags: none 00:27:19.419 sectype: none 00:27:19.419 =====Discovery Log Entry 1====== 00:27:19.419 trtype: tcp 00:27:19.419 adrfam: ipv4 00:27:19.419 subtype: nvme subsystem 00:27:19.419 treq: not specified, sq flow control disable supported 00:27:19.419 portid: 1 00:27:19.419 trsvcid: 4420 00:27:19.419 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:19.419 traddr: 10.0.0.1 00:27:19.419 eflags: none 00:27:19.419 sectype: none 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 nvme0n1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
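The `configure_kernel_target` and `nvmet_auth_init` steps interleaved through the xtrace above reduce to the configfs sequence below. The command order, NQNs, and addresses come straight from the trace; the exact `attr_*` files targeted by the bare `echo` calls are inferred from the kernel nvmet configfs layout and should be treated as an assumption.

```shell
# Sketch of the kernel nvmet soft-target setup traced above (requires root
# and the nvmet/nvmet-tcp modules; the attr_* echo targets are inferred).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# nvmet_auth_init then restricts the subsystem to one authenticated host
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"
```

After this, the `nvme discover` output above lists both the discovery subsystem and `nqn.2024-02.io.spdk:cnode0` on 10.0.0.1:4420.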
00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.419 10:37:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 nvme0n1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 10:37:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.677 
10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.677 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.933 nvme0n1 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.933 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.934 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:27:20.190 nvme0n1 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.190 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.447 nvme0n1 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:20.447 10:37:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.447 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 nvme0n1 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.705 
10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:20.705 
10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.705 10:37:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.705 10:37:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.977 nvme0n1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.977 10:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.977 10:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.977 nvme0n1 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.977 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.235 10:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.235 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.236 nvme0n1 00:27:21.236 10:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.236 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:21.493 10:37:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.493 nvme0n1 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:21.493 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.494 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.751 10:37:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.751 nvme0n1
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:27:21.751 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]]
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:22.008 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.009 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.266 nvme0n1
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:22.266 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==:
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==:
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==:
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]]
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==:
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.267 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.524 nvme0n1
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:22.524 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE:
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion:
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE:
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]]
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion:
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.525 10:37:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.790 nvme0n1
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==:
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu:
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==:
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu:
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:22.791 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.438 nvme0n1
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.438 nvme0n1
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:23.438 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:23.696 10:37:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.261 nvme0n1
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1
00:27:24.261 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==:
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==:
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==:
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]]
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==:
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- #
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.262 10:37:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.262 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.520 nvme0n1 00:27:24.520 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.520 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.520 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.520 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.520 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.778 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.778 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 10:37:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.778 10:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.778 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.347 nvme0n1 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.347 10:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.347 10:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.347 10:37:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.347 10:37:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.606 nvme0n1 00:27:25.606 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.606 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.606 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.606 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.606 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.864 10:37:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.864 10:37:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.864 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.429 nvme0n1 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.429 10:37:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.429 10:37:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.362 nvme0n1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.362 10:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.362 10:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:27.362 10:37:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.362 10:37:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.296 nvme0n1 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.296 10:38:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.296 10:38:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.229 nvme0n1 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.229 10:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:29.229 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:29.230 10:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.230 10:38:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.230 10:38:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.161 nvme0n1 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.161 10:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.161 10:38:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.161 10:38:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.725 nvme0n1 00:27:30.725 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.725 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.725 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.725 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.725 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.725 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.997 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.997 nvme0n1 00:27:30.997 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.998 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.998 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.998 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.255 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.255 nvme0n1 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.255 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.255 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.513 nvme0n1 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.513 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.513 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.513 10:38:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.513 10:38:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.770 nvme0n1 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.770 10:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.770 10:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.770 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.027 nvme0n1 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:32.027 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.028 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.285 nvme0n1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.285 10:38:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.285 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.543 nvme0n1 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.543 10:38:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.807 nvme0n1 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq 
-r '.[].name' 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:32.807 10:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 
-- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.807 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.067 nvme0n1 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.067 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.326 nvme0n1 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.326 10:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.326 10:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.326 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.326 10:38:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.584 nvme0n1 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:33.584 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.585 
10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.585 10:38:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 nvme0n1 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.843 10:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.843 10:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.843 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.101 nvme0n1 00:27:34.101 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.101 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.101 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.101 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.101 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.358 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.358 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.359 10:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.359 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.617 nvme0n1 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.617 10:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:34.617 10:38:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:34.617 
10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.617 10:38:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.875 nvme0n1 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.875 10:38:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.875 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.876 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.876 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:34.876 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.876 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.447 nvme0n1 
00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:35.447 10:38:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.447 
10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:35.447 10:38:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.013 nvme0n1 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.013 10:38:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.013 10:38:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.013 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.580 nvme0n1 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:36.580 10:38:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.580 10:38:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.580 10:38:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.146 nvme0n1 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.146 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.147 10:38:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:37.147 10:38:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.712 nvme0n1 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.712 10:38:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.712 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.643 nvme0n1 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.643 10:38:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.643 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.644 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.575 nvme0n1 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.575 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.576 10:38:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.507 nvme0n1 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.507 10:38:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.438 nvme0n1 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:41.438 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.439 10:38:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:42.373 nvme0n1 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.373 10:38:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.373 nvme0n1 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.373 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.632 10:38:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.632 nvme0n1 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.632 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 
00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.899 nvme0n1 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.899 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:42.900 10:38:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.900 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.157 nvme0n1 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:27:43.157 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.158 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.416 nvme0n1 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 
00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.416 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 nvme0n1 00:27:43.674 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.674 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.674 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:27:43.674 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.674 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 10:38:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.674 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.675 10:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.675 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.933 nvme0n1 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.933 10:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.933 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.934 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.934 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.934 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.934 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.934 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.934 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.192 nvme0n1 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.192 10:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.192 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.451 nvme0n1 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.451 10:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:44.451 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.452 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.711 nvme0n1 00:27:44.711 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.711 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.711 
10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.711 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.711 10:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd: 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.711 10:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.711 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.969 nvme0n1 00:27:44.969 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.969 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.969 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.969 10:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.969 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.969 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.969 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.970 
10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.970 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.228 nvme0n1 00:27:45.228 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 
00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.486 10:38:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.486 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 nvme0n1 00:27:45.745 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.745 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.745 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.745 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.745 10:38:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.745 10:38:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:45.745 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.004 nvme0n1
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.004 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.262 nvme0n1
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.262 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]]
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.521 10:38:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.779 nvme0n1
00:27:46.779 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:46.779 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:46.779 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:46.779 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:46.779 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:46.779 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==:
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==:
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==:
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==:
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:47.037 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:27:47.038 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.038 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.605 nvme0n1
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE:
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion:
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE:
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion:
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:47.605 10:38:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.190 nvme0n1
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==:
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu:
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==:
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu:
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.190 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.448 nvme0n1
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=:
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:48.705 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:48.706 10:38:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.271 nvme0n1
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJjNzRkYzk2NmJmZTljMzgyMGU4YTJhYWNmZjcxNDYL56dd:
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=: ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmRhNTcxZTY2NDBlYjYyMjE4NGY4ODI4NjQxOTQ4YjM1NjM5ZTdjZTc5YzAyYzA4ZWFjNzQ4NDE5Zjc0YTZjOO18T4s=:
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:49.271 10:38:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.212 nvme0n1
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0
== \n\v\m\e\0 ]] 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:50.212 10:38:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.212 10:38:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 nvme0n1 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.144 
10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.144 10:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.075 nvme0n1 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.075 10:38:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmU2MDQ5MzBlYzM0YTQ1NzZiMTBiNzJmM2Y1YmZlOWFiNGJkN2U3NzE1ZjYyNTFiXcKJWQ==: 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzMwNGQwOTZjNGIwZDM0MWNiOTI2YzZiYzFkYzIwOGWFReSu: 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.075 10:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:52.657 nvme0n1 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NThmOTg2NTM3MzM4OTlkNTdlMGJiZGQ3N2FiZjZmODg5NWI0YjAxMDM5ZDI5YTgxZGY3NjBkOGFjNjlmNTk1ZEujlnw=: 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.657 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.915 
10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.915 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 nvme0n1 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:53.849 
10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.849 10:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 request: 00:27:53.849 { 00:27:53.849 "name": "nvme0", 00:27:53.849 "trtype": "tcp", 00:27:53.849 "traddr": "10.0.0.1", 00:27:53.849 "adrfam": "ipv4", 00:27:53.849 "trsvcid": "4420", 00:27:53.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:53.849 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:53.849 "prchk_reftag": false, 00:27:53.849 "prchk_guard": false, 00:27:53.849 "hdgst": false, 00:27:53.849 "ddgst": false, 00:27:53.849 "allow_unrecognized_csi": false, 00:27:53.849 "method": "bdev_nvme_attach_controller", 00:27:53.849 "req_id": 1 00:27:53.849 } 00:27:53.849 Got JSON-RPC error response 00:27:53.849 response: 00:27:53.849 { 00:27:53.849 "code": -5, 00:27:53.849 "message": "Input/output 
error" 00:27:53.849 } 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.849 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.849 request: 00:27:53.849 { 00:27:53.849 "name": "nvme0", 00:27:53.849 "trtype": "tcp", 00:27:53.849 "traddr": "10.0.0.1", 
00:27:53.849 "adrfam": "ipv4", 00:27:53.849 "trsvcid": "4420", 00:27:53.849 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:53.850 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:53.850 "prchk_reftag": false, 00:27:53.850 "prchk_guard": false, 00:27:53.850 "hdgst": false, 00:27:53.850 "ddgst": false, 00:27:53.850 "dhchap_key": "key2", 00:27:53.850 "allow_unrecognized_csi": false, 00:27:53.850 "method": "bdev_nvme_attach_controller", 00:27:53.850 "req_id": 1 00:27:53.850 } 00:27:53.850 Got JSON-RPC error response 00:27:53.850 response: 00:27:53.850 { 00:27:53.850 "code": -5, 00:27:53.850 "message": "Input/output error" 00:27:53.850 } 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:53.850 10:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:53.850 10:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.850 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.108 request: 00:27:54.108 { 00:27:54.108 "name": "nvme0", 00:27:54.108 "trtype": "tcp", 00:27:54.108 "traddr": "10.0.0.1", 00:27:54.108 "adrfam": "ipv4", 00:27:54.108 "trsvcid": "4420", 00:27:54.108 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:54.108 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:54.108 "prchk_reftag": false, 00:27:54.108 "prchk_guard": false, 00:27:54.108 "hdgst": false, 00:27:54.108 "ddgst": false, 00:27:54.108 "dhchap_key": "key1", 00:27:54.108 "dhchap_ctrlr_key": "ckey2", 00:27:54.108 "allow_unrecognized_csi": false, 00:27:54.108 "method": "bdev_nvme_attach_controller", 00:27:54.108 "req_id": 1 00:27:54.108 } 00:27:54.108 Got JSON-RPC error response 00:27:54.108 response: 00:27:54.108 { 00:27:54.108 "code": -5, 00:27:54.108 "message": "Input/output error" 00:27:54.108 } 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.108 nvme0n1 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.108 10:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:54.108 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.109 10:38:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.368 request: 00:27:54.368 { 00:27:54.368 "name": "nvme0", 00:27:54.368 "dhchap_key": "key1", 00:27:54.368 "dhchap_ctrlr_key": "ckey2", 00:27:54.368 "method": "bdev_nvme_set_keys", 00:27:54.368 "req_id": 1 00:27:54.368 } 00:27:54.368 Got JSON-RPC error response 00:27:54.368 response: 00:27:54.368 { 00:27:54.368 "code": -13, 00:27:54.368 "message": "Permission denied" 00:27:54.368 } 00:27:54.368 
10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:54.368 10:38:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:55.302 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.302 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:55.302 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.302 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.302 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:55.560 10:38:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.560 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmQzY2M0OTkwZTU0ZGUzYjYyYjA0MzMwYTQ1NTFlMmEwZmFkY2Q5OTQzOTFjN2UwM5/V0w==: 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzMwNjJhMmIxNThhMjE0ZjlkYTlhNWRlZTIzOWQzYmUxZmNiOGJjZTg3MjA5YjQ0d9p10g==: 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.561 nvme0n1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDU4NzI2MWYzZmVlNTg3Nzc3YWUyYzEyYjk2MTNmY2LFIWbE: 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWViZTYxZDlmYTFjYTY0OGJmZjk0ZjhhMzYxMjcyNzhskion: 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:27:55.561 request: 00:27:55.561 { 00:27:55.561 "name": "nvme0", 00:27:55.561 "dhchap_key": "key2", 00:27:55.561 "dhchap_ctrlr_key": "ckey1", 00:27:55.561 "method": "bdev_nvme_set_keys", 00:27:55.561 "req_id": 1 00:27:55.561 } 00:27:55.561 Got JSON-RPC error response 00:27:55.561 response: 00:27:55.561 { 00:27:55.561 "code": -13, 00:27:55.561 "message": "Permission denied" 00:27:55.561 } 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.561 10:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.818 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:55.818 10:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:56.750 rmmod nvme_tcp 00:27:56.750 rmmod nvme_fabrics 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2637207 ']' 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2637207 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2637207 ']' 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2637207 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 
00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2637207 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2637207' 00:27:56.750 killing process with pid 2637207 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2637207 00:27:56.750 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2637207 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:57.009 10:38:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:59.545 10:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:00.477 0000:00:04.7 (8086 0e27): ioatdma -> 
vfio-pci 00:28:00.477 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:00.477 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:00.477 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:00.477 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:00.477 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:00.477 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:00.477 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:00.477 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:01.415 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:01.673 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.w2e /tmp/spdk.key-null.PBZ /tmp/spdk.key-sha256.GZ6 /tmp/spdk.key-sha384.NBj /tmp/spdk.key-sha512.BGw /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:01.673 10:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:03.047 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:03.047 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:03.047 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:03.047 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:03.047 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:03.047 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:03.047 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:03.047 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:03.047 0000:80:04.7 (8086 
0e27): Already using the vfio-pci driver 00:28:03.047 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:03.047 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:03.047 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:03.047 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:03.047 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:03.047 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:03.047 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:03.047 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:03.047 00:28:03.047 real 0m50.151s 00:28:03.047 user 0m47.284s 00:28:03.047 sys 0m6.138s 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.047 ************************************ 00:28:03.047 END TEST nvmf_auth_host 00:28:03.047 ************************************ 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.047 10:38:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.047 ************************************ 00:28:03.048 START TEST nvmf_digest 00:28:03.048 ************************************ 00:28:03.048 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:03.048 * Looking for test storage... 
00:28:03.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.048 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:03.048 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:28:03.048 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:03.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.306 --rc genhtml_branch_coverage=1 00:28:03.306 --rc genhtml_function_coverage=1 00:28:03.306 --rc genhtml_legend=1 00:28:03.306 --rc geninfo_all_blocks=1 00:28:03.306 --rc geninfo_unexecuted_blocks=1 00:28:03.306 00:28:03.306 ' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:03.306 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:28:03.306 --rc genhtml_branch_coverage=1 00:28:03.306 --rc genhtml_function_coverage=1 00:28:03.306 --rc genhtml_legend=1 00:28:03.306 --rc geninfo_all_blocks=1 00:28:03.306 --rc geninfo_unexecuted_blocks=1 00:28:03.306 00:28:03.306 ' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:03.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.306 --rc genhtml_branch_coverage=1 00:28:03.306 --rc genhtml_function_coverage=1 00:28:03.306 --rc genhtml_legend=1 00:28:03.306 --rc geninfo_all_blocks=1 00:28:03.306 --rc geninfo_unexecuted_blocks=1 00:28:03.306 00:28:03.306 ' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:03.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.306 --rc genhtml_branch_coverage=1 00:28:03.306 --rc genhtml_function_coverage=1 00:28:03.306 --rc genhtml_legend=1 00:28:03.306 --rc geninfo_all_blocks=1 00:28:03.306 --rc geninfo_unexecuted_blocks=1 00:28:03.306 00:28:03.306 ' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.306 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:03.307 10:38:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:05.840 
10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:05.840 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:05.841 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:05.841 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:05.841 Found net devices under 0000:09:00.0: cvl_0_0 00:28:05.841 
10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:05.841 Found net devices under 0000:09:00.1: cvl_0_1 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:05.841 10:38:37 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:05.841 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:05.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:05.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:28:05.842 00:28:05.842 --- 10.0.0.2 ping statistics --- 00:28:05.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.842 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:05.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:05.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:28:05.842 00:28:05.842 --- 10.0.0.1 ping statistics --- 00:28:05.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:05.842 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:05.842 ************************************ 00:28:05.842 START TEST nvmf_digest_clean 00:28:05.842 ************************************ 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2646701 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2646701 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2646701 ']' 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:05.842 10:38:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.842 [2024-12-09 10:38:37.963235] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:28:05.842 [2024-12-09 10:38:37.963317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:05.842 [2024-12-09 10:38:38.034802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.842 [2024-12-09 10:38:38.090742] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:05.842 [2024-12-09 10:38:38.090800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:05.842 [2024-12-09 10:38:38.090830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:05.842 [2024-12-09 10:38:38.090848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:05.842 [2024-12-09 10:38:38.090862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:05.842 [2024-12-09 10:38:38.091472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.842 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.100 null0 00:28:06.100 [2024-12-09 10:38:38.317911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.100 [2024-12-09 10:38:38.342153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:06.100 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.100 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2646731 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2646731 /var/tmp/bperf.sock 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2646731 ']' 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:06.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:06.101 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:06.101 [2024-12-09 10:38:38.391097] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:06.101 [2024-12-09 10:38:38.391210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2646731 ] 00:28:06.101 [2024-12-09 10:38:38.461952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.101 [2024-12-09 10:38:38.520550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.358 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:06.358 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:06.358 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:06.358 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:06.358 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.615 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:06.615 10:38:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:07.178 nvme0n1 00:28:07.178 10:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:07.178 10:38:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.178 Running I/O for 2 seconds... 00:28:09.041 17998.00 IOPS, 70.30 MiB/s [2024-12-09T09:38:41.739Z] 18283.00 IOPS, 71.42 MiB/s 00:28:09.298 Latency(us) 00:28:09.298 [2024-12-09T09:38:41.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.298 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:09.298 nvme0n1 : 2.05 17940.32 70.08 0.00 0.00 6989.09 3616.62 45438.29 00:28:09.298 [2024-12-09T09:38:41.739Z] =================================================================================================================== 00:28:09.298 [2024-12-09T09:38:41.739Z] Total : 17940.32 70.08 0.00 0.00 6989.09 3616.62 45438.29 00:28:09.298 { 00:28:09.298 "results": [ 00:28:09.298 { 00:28:09.298 "job": "nvme0n1", 00:28:09.299 "core_mask": "0x2", 00:28:09.299 "workload": "randread", 00:28:09.299 "status": "finished", 00:28:09.299 "queue_depth": 128, 00:28:09.299 "io_size": 4096, 00:28:09.299 "runtime": 2.045337, 00:28:09.299 "iops": 17940.319859270134, 00:28:09.299 "mibps": 70.07937445027396, 00:28:09.299 "io_failed": 0, 00:28:09.299 "io_timeout": 0, 00:28:09.299 "avg_latency_us": 6989.086454743837, 00:28:09.299 "min_latency_us": 3616.6162962962962, 00:28:09.299 "max_latency_us": 45438.293333333335 00:28:09.299 } 00:28:09.299 ], 00:28:09.299 "core_count": 1 00:28:09.299 } 00:28:09.299 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:09.299 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:28:09.299 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:09.299 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:09.299 | select(.opcode=="crc32c") 00:28:09.299 | "\(.module_name) \(.executed)"' 00:28:09.299 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2646731 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2646731 ']' 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2646731 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646731 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646731' 00:28:09.556 killing process with pid 2646731 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2646731 00:28:09.556 Received shutdown signal, test time was about 2.000000 seconds 00:28:09.556 00:28:09.556 Latency(us) 00:28:09.556 [2024-12-09T09:38:41.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.556 [2024-12-09T09:38:41.997Z] =================================================================================================================== 00:28:09.556 [2024-12-09T09:38:41.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.556 10:38:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2646731 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:09.814 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2647252 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2647252 /var/tmp/bperf.sock 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2647252 ']' 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:09.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.815 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:09.815 [2024-12-09 10:38:42.129037] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:09.815 [2024-12-09 10:38:42.129109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2647252 ] 00:28:09.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:09.815 Zero copy mechanism will not be used. 
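bdevperf prints the zero-copy notice above because the configured IO size (131072 bytes) exceeds the 65536-byte zero-copy threshold it reports; a sketch of that comparison:

```shell
# The zero-copy notice fires when io_size exceeds the 65536-byte threshold
# reported by bdevperf; 131072 is the -o value passed on the command line above.
io_size=131072
threshold=65536
if [ "$io_size" -gt "$threshold" ]; then
    echo "Zero copy mechanism will not be used."
fi
```

The 4096-byte runs in this log fall under the threshold, which is why they print no such notice.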
00:28:09.815 [2024-12-09 10:38:42.192842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.815 [2024-12-09 10:38:42.249819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:10.072 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:10.072 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:10.072 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:10.072 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:10.072 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:10.330 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.330 10:38:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:10.896 nvme0n1 00:28:10.896 10:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:10.896 10:38:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:10.896 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:10.896 Zero copy mechanism will not be used. 00:28:10.896 Running I/O for 2 seconds... 
00:28:13.249 4793.00 IOPS, 599.12 MiB/s [2024-12-09T09:38:45.690Z] 4783.00 IOPS, 597.88 MiB/s 00:28:13.249 Latency(us) 00:28:13.249 [2024-12-09T09:38:45.690Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.249 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:13.249 nvme0n1 : 2.05 4684.37 585.55 0.00 0.00 3346.20 813.13 45826.65 00:28:13.249 [2024-12-09T09:38:45.690Z] =================================================================================================================== 00:28:13.249 [2024-12-09T09:38:45.690Z] Total : 4684.37 585.55 0.00 0.00 3346.20 813.13 45826.65 00:28:13.249 { 00:28:13.249 "results": [ 00:28:13.249 { 00:28:13.249 "job": "nvme0n1", 00:28:13.249 "core_mask": "0x2", 00:28:13.249 "workload": "randread", 00:28:13.249 "status": "finished", 00:28:13.249 "queue_depth": 16, 00:28:13.249 "io_size": 131072, 00:28:13.249 "runtime": 2.045525, 00:28:13.249 "iops": 4684.371982742817, 00:28:13.249 "mibps": 585.5464978428521, 00:28:13.249 "io_failed": 0, 00:28:13.249 "io_timeout": 0, 00:28:13.249 "avg_latency_us": 3346.2046819267607, 00:28:13.249 "min_latency_us": 813.1318518518518, 00:28:13.249 "max_latency_us": 45826.654814814814 00:28:13.249 } 00:28:13.249 ], 00:28:13.249 "core_count": 1 00:28:13.249 } 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:13.249 | select(.opcode=="crc32c") 00:28:13.249 | "\(.module_name) \(.executed)"' 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2647252 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2647252 ']' 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2647252 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2647252 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2647252' 00:28:13.249 killing process with pid 2647252 00:28:13.249 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2647252 00:28:13.249 Received shutdown signal, test time was about 2.000000 seconds 
00:28:13.249 00:28:13.249 Latency(us) 00:28:13.249 [2024-12-09T09:38:45.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:13.250 [2024-12-09T09:38:45.691Z] =================================================================================================================== 00:28:13.250 [2024-12-09T09:38:45.691Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:13.250 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2647252 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2647662 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2647662 /var/tmp/bperf.sock 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2647662 ']' 00:28:13.508 10:38:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:13.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.508 10:38:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:13.766 [2024-12-09 10:38:45.967823] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:13.766 [2024-12-09 10:38:45.967893] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2647662 ] 00:28:13.766 [2024-12-09 10:38:46.033839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.766 [2024-12-09 10:38:46.090368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.766 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.766 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:13.766 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:13.766 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:13.766 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:14.332 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.332 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:14.589 nvme0n1 00:28:14.589 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:14.589 10:38:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:14.589 Running I/O for 2 seconds... 
00:28:16.620 21707.00 IOPS, 84.79 MiB/s [2024-12-09T09:38:49.061Z] 21303.00 IOPS, 83.21 MiB/s 00:28:16.620 Latency(us) 00:28:16.620 [2024-12-09T09:38:49.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.620 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:16.620 nvme0n1 : 2.01 21300.87 83.21 0.00 0.00 5996.29 2657.85 13689.74 00:28:16.620 [2024-12-09T09:38:49.061Z] =================================================================================================================== 00:28:16.620 [2024-12-09T09:38:49.061Z] Total : 21300.87 83.21 0.00 0.00 5996.29 2657.85 13689.74 00:28:16.620 { 00:28:16.620 "results": [ 00:28:16.620 { 00:28:16.620 "job": "nvme0n1", 00:28:16.620 "core_mask": "0x2", 00:28:16.620 "workload": "randwrite", 00:28:16.620 "status": "finished", 00:28:16.620 "queue_depth": 128, 00:28:16.620 "io_size": 4096, 00:28:16.620 "runtime": 2.006209, 00:28:16.620 "iops": 21300.871444600238, 00:28:16.620 "mibps": 83.20652908046968, 00:28:16.620 "io_failed": 0, 00:28:16.620 "io_timeout": 0, 00:28:16.621 "avg_latency_us": 5996.288999755595, 00:28:16.621 "min_latency_us": 2657.8488888888887, 00:28:16.621 "max_latency_us": 13689.742222222223 00:28:16.621 } 00:28:16.621 ], 00:28:16.621 "core_count": 1 00:28:16.621 } 00:28:16.621 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:16.621 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:16.621 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:16.621 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:16.621 | select(.opcode=="crc32c") 00:28:16.621 | "\(.module_name) \(.executed)"' 00:28:16.621 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2647662 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2647662 ']' 00:28:16.877 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2647662 00:28:16.878 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:17.135 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.135 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2647662 00:28:17.135 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.135 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.136 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2647662' 00:28:17.136 killing process with pid 2647662 00:28:17.136 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2647662 00:28:17.136 Received shutdown signal, test time was about 2.000000 seconds 
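As a rough sanity check on the randwrite 4096/128 summary above (21300.87 IOPS at queue depth 128 with a 5996.29 µs average latency), Little's Law predicts IOPS ≈ queue_depth / mean_latency. A hedged shell sketch — the agreement is only approximate, since the measured average folds in ramp-up and completion-batching effects:

```shell
# Little's Law sanity check: expected IOPS ~= queue_depth / mean latency (seconds).
# Figures taken from the randwrite summary in this log; treat as approximate.
qd=128
avg_latency_us=5996.29
awk -v qd="$qd" -v lat="$avg_latency_us" \
    'BEGIN { printf "%.0f IOPS expected\n", qd / (lat / 1000000) }'
```

The predicted figure lands within a few percent of the 21300.87 IOPS the run reported.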
00:28:17.136 00:28:17.136 Latency(us) 00:28:17.136 [2024-12-09T09:38:49.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:17.136 [2024-12-09T09:38:49.577Z] =================================================================================================================== 00:28:17.136 [2024-12-09T09:38:49.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:17.136 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2647662 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2648079 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2648079 /var/tmp/bperf.sock 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2648079 ']' 00:28:17.394 10:38:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:17.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:17.394 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:17.394 [2024-12-09 10:38:49.640376] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:17.394 [2024-12-09 10:38:49.640480] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648079 ] 00:28:17.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:17.394 Zero copy mechanism will not be used. 
00:28:17.394 [2024-12-09 10:38:49.707207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:17.394 [2024-12-09 10:38:49.766401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.652 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:17.652 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:17.652 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:17.652 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:17.652 10:38:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:17.910 10:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:17.910 10:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:18.475 nvme0n1 00:28:18.475 10:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:18.475 10:38:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:18.475 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.475 Zero copy mechanism will not be used. 00:28:18.475 Running I/O for 2 seconds... 
00:28:20.775 6102.00 IOPS, 762.75 MiB/s [2024-12-09T09:38:53.216Z] 6216.50 IOPS, 777.06 MiB/s 00:28:20.775 Latency(us) 00:28:20.775 [2024-12-09T09:38:53.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.775 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:20.776 nvme0n1 : 2.00 6213.82 776.73 0.00 0.00 2567.52 1735.49 12913.02 00:28:20.776 [2024-12-09T09:38:53.217Z] =================================================================================================================== 00:28:20.776 [2024-12-09T09:38:53.217Z] Total : 6213.82 776.73 0.00 0.00 2567.52 1735.49 12913.02 00:28:20.776 { 00:28:20.776 "results": [ 00:28:20.776 { 00:28:20.776 "job": "nvme0n1", 00:28:20.776 "core_mask": "0x2", 00:28:20.776 "workload": "randwrite", 00:28:20.776 "status": "finished", 00:28:20.776 "queue_depth": 16, 00:28:20.776 "io_size": 131072, 00:28:20.776 "runtime": 2.004081, 00:28:20.776 "iops": 6213.820698863968, 00:28:20.776 "mibps": 776.727587357996, 00:28:20.776 "io_failed": 0, 00:28:20.776 "io_timeout": 0, 00:28:20.776 "avg_latency_us": 2567.5225082755605, 00:28:20.776 "min_latency_us": 1735.4903703703703, 00:28:20.776 "max_latency_us": 12913.01925925926 00:28:20.776 } 00:28:20.776 ], 00:28:20.776 "core_count": 1 00:28:20.776 } 00:28:20.776 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:20.776 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:20.776 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:20.776 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:20.776 10:38:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:28:20.776 | select(.opcode=="crc32c") 00:28:20.776 | "\(.module_name) \(.executed)"' 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2648079 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2648079 ']' 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2648079 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.776 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648079 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2648079' 00:28:21.032 killing process with pid 2648079 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2648079 00:28:21.032 Received shutdown signal, test time was about 2.000000 seconds 00:28:21.032 
00:28:21.032 Latency(us) 00:28:21.032 [2024-12-09T09:38:53.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:21.032 [2024-12-09T09:38:53.473Z] =================================================================================================================== 00:28:21.032 [2024-12-09T09:38:53.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2648079 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2646701 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2646701 ']' 00:28:21.032 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2646701 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646701 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646701' 00:28:21.289 killing process with pid 2646701 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2646701 00:28:21.289 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2646701 00:28:21.547 00:28:21.547 real 
0m15.838s 00:28:21.547 user 0m30.997s 00:28:21.547 sys 0m4.414s 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:21.547 ************************************ 00:28:21.547 END TEST nvmf_digest_clean 00:28:21.547 ************************************ 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:21.547 ************************************ 00:28:21.547 START TEST nvmf_digest_error 00:28:21.547 ************************************ 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2648633 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:21.547 
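The `killprocess` helper traced above probes liveness with `kill -0 $pid`, checks the command name via `ps --no-headers -o comm=`, and only then sends a real signal. A small sketch of the `kill -0` probe itself in Python (an illustration of the POSIX signal-0 semantics, not SPDK code): signal 0 delivers nothing and only checks whether the pid exists and is signalable.

```python
import os

def is_alive(pid: int) -> bool:
    """Mirror of the shell's `kill -0 $pid`: probe a pid without signaling it."""
    try:
        os.kill(pid, 0)          # signal 0: existence/permission check only
    except ProcessLookupError:   # no such process -> `kill -0` would also fail
        return False
    except PermissionError:      # process exists but belongs to another user
        return True
    return True
```

For example, `is_alive(os.getpid())` is always true for the running interpreter.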
10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2648633 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2648633 ']' 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.547 10:38:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.547 [2024-12-09 10:38:53.860476] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:21.547 [2024-12-09 10:38:53.860583] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.547 [2024-12-09 10:38:53.933877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.803 [2024-12-09 10:38:53.993082] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.803 [2024-12-09 10:38:53.993173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:21.803 [2024-12-09 10:38:53.993196] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.803 [2024-12-09 10:38:53.993213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:21.803 [2024-12-09 10:38:53.993241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.803 [2024-12-09 10:38:53.993849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.803 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.803 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:21.803 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.803 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.804 [2024-12-09 10:38:54.118618] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.804 10:38:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.804 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:21.804 null0 00:28:21.804 [2024-12-09 10:38:54.241204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.079 [2024-12-09 10:38:54.265463] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2648664 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2648664 /var/tmp/bperf.sock 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2648664 ']' 
00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.079 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.079 [2024-12-09 10:38:54.319859] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:22.079 [2024-12-09 10:38:54.319943] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2648664 ] 00:28:22.079 [2024-12-09 10:38:54.393087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.079 [2024-12-09 10:38:54.454147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:22.336 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.336 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:22.336 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.336 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:22.593 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:22.593 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.593 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.593 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.593 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.593 10:38:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:22.849 nvme0n1 00:28:22.850 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:22.850 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.850 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:22.850 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.850 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:22.850 10:38:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:23.106 Running I/O for 2 seconds... 00:28:23.106 [2024-12-09 10:38:55.405840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.405891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.405912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.417241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.417272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.417289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.431935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.431966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:7538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.431983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.444670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.444702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24736 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.444734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.457766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.457795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.457811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.471784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.471813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.471829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.485459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.485490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.485507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.498535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.498567] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.498585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.511869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.511900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.511917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.522796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.106 [2024-12-09 10:38:55.522842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.106 [2024-12-09 10:38:55.522865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.106 [2024-12-09 10:38:55.537020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.107 [2024-12-09 10:38:55.537052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.107 [2024-12-09 10:38:55.537070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.363 [2024-12-09 10:38:55.549287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.363 [2024-12-09 
10:38:55.549318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.363 [2024-12-09 10:38:55.549336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.363 [2024-12-09 10:38:55.562015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.363 [2024-12-09 10:38:55.562051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.363 [2024-12-09 10:38:55.562067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.574580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.574609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.574624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.590611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.590655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.590673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.603947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.603978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.603995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.618197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.618228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.618246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.631019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.631050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.631081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.643687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.643736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:23195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.643754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.656265] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.656296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.656314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.668860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.668894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.668911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.683276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.683307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.683324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.694635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.694663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.694677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.709233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.709262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:8400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.709277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.723371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.723403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.723420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.739362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.739393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.739410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.755428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.755460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.755478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.771361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.771393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.771411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.783080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.783110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.783126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.364 [2024-12-09 10:38:55.796668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.364 [2024-12-09 10:38:55.796699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.364 [2024-12-09 10:38:55.796722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.621 [2024-12-09 10:38:55.808153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.621 [2024-12-09 10:38:55.808182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.621 [2024-12-09 10:38:55.808198] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.621 [2024-12-09 10:38:55.823537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.621 [2024-12-09 10:38:55.823565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.621 [2024-12-09 10:38:55.823580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.621 [2024-12-09 10:38:55.837440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.621 [2024-12-09 10:38:55.837471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.621 [2024-12-09 10:38:55.837488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.621 [2024-12-09 10:38:55.854611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.854639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.854655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.869174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.869205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15977 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.869222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.884879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.884911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.884929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.895850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.895879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.895894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.912327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.912357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.912374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.926358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.926390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:47 nsid:1 lba:2893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.926407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.940695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.940722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.940737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.954318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.954349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.954367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.969721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.969751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.969768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.980741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.980768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.980783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:55.993859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:55.993890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:55.993908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:56.007556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:56.007584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:56.007598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:56.020027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:56.020054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:56.020069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:56.033673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:56.033715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:56.033737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:56.047127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:56.047179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:56.047196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.622 [2024-12-09 10:38:56.061136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.622 [2024-12-09 10:38:56.061176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.622 [2024-12-09 10:38:56.061193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.073965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.073996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.074013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.086355] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.086386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.086403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.100762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.100791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.100807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.112100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.112148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.112166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.126117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.126165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.126181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.143037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.143065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.143080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.158056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.158091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.158107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.174203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.174232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.174247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.189975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.190003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.190019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.204625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.204655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.204687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.217813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.217841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.217856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.233847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.233875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.233890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.248729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.248760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.248778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.261913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.261943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.261960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.279199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.279227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.279242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.291206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.291237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.291255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.303912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.303957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:23.880 [2024-12-09 10:38:56.303974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:23.880 [2024-12-09 10:38:56.318797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:23.880 [2024-12-09 10:38:56.318826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:23.880 [2024-12-09 10:38:56.318858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.334610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.334638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.334653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.347238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.347267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:10442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.347283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.359428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.359459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:2096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.359475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.372923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.372966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.372982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 18344.00 IOPS, 71.66 MiB/s [2024-12-09T09:38:56.579Z] [2024-12-09 10:38:56.387847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.387878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.387909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.398728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.398756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.398777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.415230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 
00:28:24.138 [2024-12-09 10:38:56.415259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.415275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.429228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.429273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.429290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.443192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.443223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.443240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.454856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.454899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.454916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.468066] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.468094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.468110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.484063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.484095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:13335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.484113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.499096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.138 [2024-12-09 10:38:56.499124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.138 [2024-12-09 10:38:56.499146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.138 [2024-12-09 10:38:56.515242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.139 [2024-12-09 10:38:56.515274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.139 [2024-12-09 10:38:56.515291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:28:24.139 [2024-12-09 10:38:56.529636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.139 [2024-12-09 10:38:56.529688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.139 [2024-12-09 10:38:56.529707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.139 [2024-12-09 10:38:56.543588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.139 [2024-12-09 10:38:56.543630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.139 [2024-12-09 10:38:56.543645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.139 [2024-12-09 10:38:56.556583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.139 [2024-12-09 10:38:56.556614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.139 [2024-12-09 10:38:56.556631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.139 [2024-12-09 10:38:56.568650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.139 [2024-12-09 10:38:56.568681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.139 [2024-12-09 10:38:56.568713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.396 [2024-12-09 10:38:56.580161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.396 [2024-12-09 10:38:56.580190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.580206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.595387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.595416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.595431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.608966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.608994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.609009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.621323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.621353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.621369] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.634272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.634304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.634336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.645591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.645619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.645633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.660027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.660055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.660069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.674026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.674057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19764 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.674088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.688871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.688903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.688919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.702048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.702080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.702097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.713089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.713116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.713154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.729672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.729700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:79 nsid:1 lba:16133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.729714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.745129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.745164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.745180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.761321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.761358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.761376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.774847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.774890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.774906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.786587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.786614] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.786630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.800830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.800858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.800873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.817547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.817575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.817590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.397 [2024-12-09 10:38:56.831006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.397 [2024-12-09 10:38:56.831049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.397 [2024-12-09 10:38:56.831064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.844581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.844627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.844643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.858482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.858527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.858543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.872265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.872296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.872312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.885153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.885184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.885200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.898439] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.898469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:20824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.898500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.912200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.912229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.912246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.925888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.925916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.925931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.940644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.940675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.940692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.951839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.951867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.951882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.967103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.967156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.967174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.981481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.981525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.981542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:56.992601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:56.992632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:56.992655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:57.008309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:57.008337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:57.008353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:57.021438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:57.021467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:57.021496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:57.032477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.655 [2024-12-09 10:38:57.032522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:14809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.655 [2024-12-09 10:38:57.032538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.655 [2024-12-09 10:38:57.046009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.656 [2024-12-09 10:38:57.046037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.656 [2024-12-09 
10:38:57.046052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.656 [2024-12-09 10:38:57.061729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.656 [2024-12-09 10:38:57.061757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.656 [2024-12-09 10:38:57.061773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.656 [2024-12-09 10:38:57.076154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.656 [2024-12-09 10:38:57.076182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.656 [2024-12-09 10:38:57.076197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.656 [2024-12-09 10:38:57.092392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.656 [2024-12-09 10:38:57.092424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.656 [2024-12-09 10:38:57.092441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.913 [2024-12-09 10:38:57.107093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.913 [2024-12-09 10:38:57.107136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1399 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.913 [2024-12-09 10:38:57.107162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.913 [2024-12-09 10:38:57.120414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.913 [2024-12-09 10:38:57.120451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.913 [2024-12-09 10:38:57.120469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.134246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.134278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.134296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.145572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.145600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.145615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.159957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.160005] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.160023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.175178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.175209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.175241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.187304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.187333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.187348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.200967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.200995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.201010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.214702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 
10:38:57.214732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.214749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.229301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.229332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.229349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.240255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.240298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.240314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.257254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.257285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.257301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.270307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.270338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.270355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.281848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.281875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.281890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.297082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.297109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.297147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.312260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.312298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.312314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.326531] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.326562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.326580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.338146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.338174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.338190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:24.914 [2024-12-09 10:38:57.351658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:24.914 [2024-12-09 10:38:57.351688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.914 [2024-12-09 10:38:57.351725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:25.172 [2024-12-09 10:38:57.366680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420) 00:28:25.172 [2024-12-09 10:38:57.366724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:25.172 [2024-12-09 10:38:57.366740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:28:25.172 [2024-12-09 10:38:57.381345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420)
00:28:25.172 [2024-12-09 10:38:57.381376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.172 [2024-12-09 10:38:57.381392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.172 18438.00 IOPS, 72.02 MiB/s [2024-12-09T09:38:57.613Z] [2024-12-09 10:38:57.392866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa26420)
00:28:25.172 [2024-12-09 10:38:57.392898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:25.172 [2024-12-09 10:38:57.392931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:25.172
00:28:25.172 Latency(us)
00:28:25.172 [2024-12-09T09:38:57.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:25.172 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:25.172 nvme0n1 : 2.01 18448.40 72.06 0.00 0.00 6927.40 3568.07 21456.97
00:28:25.172 [2024-12-09T09:38:57.613Z] ===================================================================================================================
00:28:25.172 [2024-12-09T09:38:57.613Z] Total : 18448.40 72.06 0.00 0.00 6927.40 3568.07 21456.97
00:28:25.172 {
00:28:25.172   "results": [
00:28:25.172     {
00:28:25.172       "job": "nvme0n1",
00:28:25.172       "core_mask": "0x2",
00:28:25.172       "workload": "randread",
00:28:25.172       "status": "finished",
00:28:25.172       "queue_depth": 128,
00:28:25.172       "io_size": 4096,
00:28:25.172       "runtime": 2.005811,
00:28:25.172       "iops": 18448.39817909065,
00:28:25.172       "mibps": 72.06405538707286,
00:28:25.172       "io_failed": 0,
00:28:25.172       "io_timeout": 0,
00:28:25.172       "avg_latency_us": 6927.395564523555,
00:28:25.172       "min_latency_us": 3568.071111111111,
00:28:25.172       "max_latency_us": 21456.971851851853
00:28:25.172     }
00:28:25.172   ],
00:28:25.172   "core_count": 1
00:28:25.172 }
00:28:25.172 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:25.172 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:25.172 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:25.172 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:25.172 | .driver_specific
00:28:25.172 | .nvme_error
00:28:25.172 | .status_code
00:28:25.172 | .command_transient_transport_error'
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2648664
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2648664 ']'
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2648664
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648664
00:28:25.431 10:38:57
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2648664'
00:28:25.431 killing process with pid 2648664
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2648664
00:28:25.431 Received shutdown signal, test time was about 2.000000 seconds
00:28:25.431
00:28:25.431 Latency(us)
00:28:25.431 [2024-12-09T09:38:57.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:25.431 [2024-12-09T09:38:57.872Z] ===================================================================================================================
00:28:25.431 [2024-12-09T09:38:57.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:25.431 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2648664
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2649183
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2649183 /var/tmp/bperf.sock
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2649183 ']'
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:25.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:25.689 10:38:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:25.689 [2024-12-09 10:38:58.028032] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization...
00:28:25.689 [2024-12-09 10:38:58.028109] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649183 ]
00:28:25.689 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:25.689 Zero copy mechanism will not be used.
00:28:25.689 [2024-12-09 10:38:58.091706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.966 [2024-12-09 10:38:58.146695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:25.966 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:25.966 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:25.966 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:25.966 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:26.224 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:26.224 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.224 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.224 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.224 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.224 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:26.482 nvme0n1 00:28:26.482 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:26.482 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.482 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:26.741 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.741 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:26.741 10:38:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:26.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.741 Zero copy mechanism will not be used. 00:28:26.741 Running I/O for 2 seconds... 00:28:26.741 [2024-12-09 10:38:59.031386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:26.741 [2024-12-09 10:38:59.031433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-12-09 10:38:59.031453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:26.741 [2024-12-09 10:38:59.037082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:26.741 [2024-12-09 10:38:59.037118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:26.741 [2024-12-09 10:38:59.037137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:26.741 
[2024-12-09 10:38:59.041970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.741 [2024-12-09 10:38:59.042016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.741 [2024-12-09 10:38:59.042034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.741 [2024-12-09 10:38:59.046847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.741 [2024-12-09 10:38:59.046894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.741 [2024-12-09 10:38:59.046912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.741 [2024-12-09 10:38:59.051442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.741 [2024-12-09 10:38:59.051474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.741 [2024-12-09 10:38:59.051506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:26.741 [2024-12-09 10:38:59.056168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.056214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.056231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.060896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.060927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.060960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.065465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.065496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.065513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.070200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.070231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.070249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.074910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.074943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.074961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.079420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.079458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.079491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.084187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.084218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.084236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.089054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.089085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.089108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.093634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.093666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.093684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.098723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.098769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.098786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.103431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.103462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.103495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.108127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.108167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.108200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.112820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.112852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.112883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.117747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.117778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.117795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.123302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.123333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.123351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.130904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.130936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.130954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.137494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.137532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.137550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.144206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.144254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.144272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.150867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.150899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.150917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.157272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.157304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.157321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.162889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.162920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.162939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.168355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.168387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.168405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.174661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.174693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.174712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:26.742 [2024-12-09 10:38:59.180984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:26.742 [2024-12-09 10:38:59.181030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:26.742 [2024-12-09 10:38:59.181048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.187727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.187773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.187791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.193527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.193558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.193576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.199949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.199981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.199998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.205847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.205878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.205896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.209338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.209368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.209384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.215349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.215380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.215398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.220600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.220647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.220664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.225362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.225395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.225412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.230840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.230872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.230889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.236186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.236217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.236242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.241424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.241470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.241488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.246294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.246325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.246343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.250938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.250983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.250999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.255679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.255709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.255738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.260233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.260263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.260295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.002 [2024-12-09 10:38:59.264918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.002 [2024-12-09 10:38:59.264948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.002 [2024-12-09 10:38:59.264965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.269479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.269509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.269525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.274181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.274212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.274229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.278705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.278736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.278752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.283384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.283415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.283432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.288097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.288128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.288155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.293604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.293651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.293669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.299621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.299653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.299685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.305947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.305979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.305997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.311737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.311770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.311788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.317440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.317471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.317489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.323084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.323115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.323164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.328807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.328838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.328855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.334260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.334292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.334310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.339038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.339070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.339088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.342367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.342397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.342421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.347443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.347474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.347492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.352944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.352974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.352991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.357940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.357971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.357989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.362659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.362690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.362707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.367354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.367394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.367413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.372884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.372914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.372930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.377614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.377645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.377662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.382205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.382236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.382254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.386867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.386898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.386914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.391549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.003 [2024-12-09 10:38:59.391593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.003 [2024-12-09 10:38:59.391610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.003 [2024-12-09 10:38:59.396264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.396294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.396311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.400844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.400875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.400893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.406272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.406301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.406317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.411115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.411166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.411184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.416485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.416530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.416547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.422064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.422111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.422128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.428065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.428097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.428115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.433738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.433782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.433799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.004 [2024-12-09 10:38:59.439554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.004 [2024-12-09 10:38:59.439583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.004 [2024-12-09 10:38:59.439599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.263 [2024-12-09 10:38:59.445558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.263 [2024-12-09 10:38:59.445586] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.263 [2024-12-09 10:38:59.445604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.263 [2024-12-09 10:38:59.451810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.263 [2024-12-09 10:38:59.451857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.263 [2024-12-09 10:38:59.451874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.263 [2024-12-09 10:38:59.458107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.263 [2024-12-09 10:38:59.458145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.263 [2024-12-09 10:38:59.458174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.263 [2024-12-09 10:38:59.465055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.263 [2024-12-09 10:38:59.465087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.263 [2024-12-09 10:38:59.465104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.263 [2024-12-09 10:38:59.471717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.471750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.471783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.478114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.478155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.478175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.483730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.483761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.483778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.489230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.489261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.489279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.494222] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.494253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.494284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.498763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.498794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.498811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.503348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.503379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.503396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.507893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.507932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.507950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.513228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.513275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.513293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.518189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.518232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.518249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.523232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.523277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.523294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.526635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.526667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.526685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.530786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.530817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.530835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.535709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.535754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.535770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.540676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.540708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.540725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.545676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.545708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 
10:38:59.545734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.550654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.550685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.550702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.555922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.555952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.555968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.560908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.560938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.560955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.566112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.566164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.566182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.571523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.571554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.571573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.578244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.578276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.578294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.585900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.585932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.585951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.593746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.593778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.593795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.601483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.601524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.601558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.609293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.609325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.609343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.264 [2024-12-09 10:38:59.615053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.264 [2024-12-09 10:38:59.615084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.264 [2024-12-09 10:38:59.615102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.620762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.620793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.620825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.626400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.626432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.626451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.631659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.631690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.631707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.637322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.637354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.637370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.643815] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.643862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.643879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.651078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.651110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.651127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.658780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.658812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.658830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.666450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.666482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.666501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.674384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.674416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.674434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.682147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.682178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.682196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.689676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.689723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.689740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.265 [2024-12-09 10:38:59.697290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.265 [2024-12-09 10:38:59.697321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.265 [2024-12-09 10:38:59.697340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.704981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.705013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 10:38:59.705031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.712588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.712621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 10:38:59.712639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.717504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.717538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 10:38:59.717564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.723832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.723879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 
10:38:59.723896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.731550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.731598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 10:38:59.731615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.739244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.739276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 10:38:59.739293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.746779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.746810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.524 [2024-12-09 10:38:59.746842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.524 [2024-12-09 10:38:59.754481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.524 [2024-12-09 10:38:59.754526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.754544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.762079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.762125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.762150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.768293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.768339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.768357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.774451] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.774483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.774500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.780523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.780565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.780584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.786880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.786910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.786926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.792689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.792720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.792739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.797657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.797688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.797704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.802435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.802465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.802481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.807210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.807268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.807285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.812677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.812724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.812741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.817790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.524 [2024-12-09 10:38:59.817822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.524 [2024-12-09 10:38:59.817839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.524 [2024-12-09 10:38:59.822629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.822658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.822675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.827457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.827490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.827507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.832233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.832265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.832284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.835804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.835839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.835857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.839806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.839838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.839856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.845573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.845620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.845638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.851606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.851638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.851656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.858190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.858221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.858255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.864623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.864653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.864669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.871033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.871077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.871095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.876801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.876834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.876852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.882347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.882397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.882414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.888221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.888268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.888285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.893986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.894034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.894052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.899703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.899735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.899768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.905411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.905444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.905478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.911372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.911403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.911422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.917557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.917590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.917608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.923279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.923311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.923328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.928632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.928680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.933522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.933553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.933570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.938733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.938764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.938783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.943825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.943856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.943874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.946710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.946738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.946753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.951952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.951984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.952000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.958722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.958769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.958787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.525 [2024-12-09 10:38:59.963768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.525 [2024-12-09 10:38:59.963800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.525 [2024-12-09 10:38:59.963827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:38:59.968989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:38:59.969022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:38:59.969040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:38:59.974794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:38:59.974840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:38:59.974860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:38:59.980764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:38:59.980795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:38:59.980812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:38:59.986857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:38:59.986888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:38:59.986904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:38:59.992760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:38:59.992790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:38:59.992807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:38:59.998851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:38:59.998883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:38:59.998901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:39:00.004652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:39:00.004696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:39:00.004716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:39:00.010173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:39:00.010220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:39:00.010240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:39:00.015510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:39:00.015557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:39:00.015576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.784 [2024-12-09 10:39:00.021881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.784 [2024-12-09 10:39:00.021930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.784 [2024-12-09 10:39:00.021961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.028238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.028272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.028301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 5520.00 IOPS, 690.00 MiB/s [2024-12-09T09:39:00.226Z] [2024-12-09 10:39:00.036568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.036602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.036631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.043017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.043053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.043083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.049961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.049995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.050024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.056226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.056261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.056290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.062383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.062417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.062450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.069627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.069661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.069691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.074984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.075017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.075051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.079851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.079893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.079919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.084490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.084525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.084552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.089245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.089279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.089307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.094990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.095024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.095051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.102509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.102542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.102570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.109068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.109101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.109150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.114896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.114929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.114958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.120763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.120795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.120834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.126421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.126454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.126481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.133649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.133683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.133711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.140739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.140771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.140799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.145612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.145645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.145673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.151266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.151299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.151326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.155758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.155790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.155816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.160298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.160331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.160358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.164955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.165005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.165032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.170323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.170371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.170399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.174449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.174481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.174508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.785 [2024-12-09 10:39:00.179193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.785 [2024-12-09 10:39:00.179226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.785 [2024-12-09 10:39:00.179253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.786 [2024-12-09 10:39:00.184600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.786 [2024-12-09 10:39:00.184633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.786 [2024-12-09 10:39:00.184660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.786 [2024-12-09 10:39:00.189354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.786 [2024-12-09 10:39:00.189385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.786 [2024-12-09 10:39:00.189412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:27.786 [2024-12-09 10:39:00.193862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.786 [2024-12-09 10:39:00.193893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.786 [2024-12-09 10:39:00.193920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:27.786 [2024-12-09 10:39:00.198366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.786 [2024-12-09 10:39:00.198398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.786 [2024-12-09 10:39:00.198426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:27.786 [2024-12-09 10:39:00.202775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.786 [2024-12-09 10:39:00.202807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:27.786 [2024-12-09 10:39:00.202835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:27.786 [2024-12-09 10:39:00.207553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:27.786 [2024-12-09 10:39:00.207585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:28:27.786 [2024-12-09 10:39:00.207621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:27.786 [2024-12-09 10:39:00.212208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.786 [2024-12-09 10:39:00.212240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.786 [2024-12-09 10:39:00.212268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:27.786 [2024-12-09 10:39:00.216783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.786 [2024-12-09 10:39:00.216815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.786 [2024-12-09 10:39:00.216843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:27.786 [2024-12-09 10:39:00.221424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:27.786 [2024-12-09 10:39:00.221457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:27.786 [2024-12-09 10:39:00.221485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.226762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.226809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:2 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.226836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.232010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.232058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.232086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.237112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.237153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.237183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.242940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.242973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.243013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.249226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.249258] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.249285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.255019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.255059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.255087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.261011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.261046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.261074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.267057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.267106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.267133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.273254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.273288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.273331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.280324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.280357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.280385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.285846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.285880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.285921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.291536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.291570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.291598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.297107] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.297149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.297180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.303061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.303096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.303124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.308343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.308378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.308405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.314426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.314460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.314502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0024 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.320504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.320538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.320566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.326603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.326637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.326680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.333355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.333389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.045 [2024-12-09 10:39:00.333417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.045 [2024-12-09 10:39:00.339495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.045 [2024-12-09 10:39:00.339529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.339580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.345206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.345240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.345270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.351470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.351519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.351546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.357454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.357489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.357526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.361432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.361464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.361492] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.366982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.367014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.367041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.373072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.373105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.373132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.379276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.379324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.379353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.385250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.385285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.385314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.390632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.390665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.390693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.395989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.396022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.396049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.400594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.400629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.400656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.405550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.405596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.405626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.409844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.409877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.409906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.414414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.414458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.414486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.419178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.419211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.419239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.423886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.423919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.423945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.428626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.428673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.428701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.433338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.433371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.433398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.438059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.438093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.438121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.442860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.442893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.442922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.448414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.448452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.448479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.455527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.455561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.455588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.462862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.046 [2024-12-09 10:39:00.462896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.046 [2024-12-09 10:39:00.462925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.046 [2024-12-09 10:39:00.468456] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.047 [2024-12-09 10:39:00.468490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.047 [2024-12-09 10:39:00.468519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.047 [2024-12-09 10:39:00.474089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.047 [2024-12-09 10:39:00.474124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.047 [2024-12-09 10:39:00.474164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.047 [2024-12-09 10:39:00.479962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.047 [2024-12-09 10:39:00.479995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.047 [2024-12-09 10:39:00.480024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.306 [2024-12-09 10:39:00.486492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.306 [2024-12-09 10:39:00.486527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.306 [2024-12-09 10:39:00.486569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0004 p:0 m:0 dnr:0 00:28:28.306 [2024-12-09 10:39:00.492629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.306 [2024-12-09 10:39:00.492664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.306 [2024-12-09 10:39:00.492710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.306 [2024-12-09 10:39:00.498147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.306 [2024-12-09 10:39:00.498205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.306 [2024-12-09 10:39:00.498270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.306 [2024-12-09 10:39:00.504351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.306 [2024-12-09 10:39:00.504390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.306 [2024-12-09 10:39:00.504431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.306 [2024-12-09 10:39:00.511285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.306 [2024-12-09 10:39:00.511320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.306 [2024-12-09 10:39:00.511349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.306 [2024-12-09 10:39:00.518040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.306 [2024-12-09 10:39:00.518075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.306 [2024-12-09 10:39:00.518103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.306 [2024-12-09 10:39:00.523607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.306 [2024-12-09 10:39:00.523641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.306 [2024-12-09 10:39:00.523668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.306 [2024-12-09 10:39:00.529285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.306 [2024-12-09 10:39:00.529319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.306 [2024-12-09 10:39:00.529347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.306 [2024-12-09 10:39:00.535132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.306 [2024-12-09 10:39:00.535209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.306 [2024-12-09 10:39:00.535240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.306 [2024-12-09 10:39:00.540493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.540527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.540555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.546534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.546569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.546598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.552689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.552754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.558959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.558993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.559023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.565382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.565416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.565445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.571450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.571485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.571513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.575562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.575596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.575624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.579936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.579971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.579999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.584931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.584965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.584994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.590016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.590050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.590078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.593180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.593214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.593255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.598394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.598442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.598470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.604916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.604950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.604980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.612681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.612732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.612761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.619818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.619853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.619881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.625957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.625991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.626020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.631255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.631303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.631331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.636051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.636085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.636113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.640815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.640849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.640876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.645557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.645600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.645629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.650319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.650353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.650381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.654943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.654977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.655005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.659665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.659715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.659741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.664989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.665023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.665050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.670121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.670178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.670207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.674740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.307 [2024-12-09 10:39:00.674775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.307 [2024-12-09 10:39:00.674802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.307 [2024-12-09 10:39:00.679326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.679360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.679388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.684637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.684670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.684698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.691668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.691716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.691744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.699040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.699075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.699104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.705950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.705998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.706038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.712882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.712916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.712957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.719167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.719202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.719230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.724341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.724376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.724404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.730079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.730115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.730151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.736768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.736819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.736848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.741524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.741559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.741600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.308 [2024-12-09 10:39:00.746186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.308 [2024-12-09 10:39:00.746220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.308 [2024-12-09 10:39:00.746249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.750954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.750988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.751016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.755957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.756005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.756033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.761822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.761857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.761884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.768028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.768062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.768091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.774767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.774803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.774831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.781434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.781469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.781498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.789517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.789567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.789594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.797612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.797656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.797686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.805746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.805781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.805810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.813626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.813660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.813703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.821273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.821307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.821337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.829608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.829642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.829670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.837809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.837844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.837872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.846534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.592 [2024-12-09 10:39:00.846568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.592 [2024-12-09 10:39:00.846596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.592 [2024-12-09 10:39:00.854876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.854910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.854938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.862337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.862373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.862401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.870862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.870908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.870937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.879391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.879436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.879465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.887474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.887508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.887536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.896202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.896237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.896280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.903013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.903047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.903075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.908153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.908196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.908224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.913083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.913117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.913154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.917999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.918033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.918061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.922847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.922890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.922920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.928748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.928782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.928811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.934164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.934198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.934227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.939548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.939582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.939611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.945252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.945285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.945315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.951337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.951372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.951399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.957913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.957947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.957976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.964752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.964786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.964814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.968017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.968051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.968078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.973114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.973158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.973187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.977878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0)
00:28:28.593 [2024-12-09 10:39:00.977910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:28.593 [2024-12-09 10:39:00.977936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:28:28.593 [2024-12-09 10:39:00.982700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0x12169d0) 00:28:28.593 [2024-12-09 10:39:00.982734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.593 [2024-12-09 10:39:00.982761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.593 [2024-12-09 10:39:00.987547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.593 [2024-12-09 10:39:00.987580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.593 [2024-12-09 10:39:00.987607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.593 [2024-12-09 10:39:00.992364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.593 [2024-12-09 10:39:00.992398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.593 [2024-12-09 10:39:00.992441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.593 [2024-12-09 10:39:00.997175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.593 [2024-12-09 10:39:00.997221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.593 [2024-12-09 10:39:00.997247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:28.593 [2024-12-09 10:39:01.001983] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.593 [2024-12-09 10:39:01.002016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.593 [2024-12-09 10:39:01.002043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.593 [2024-12-09 10:39:01.006955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.593 [2024-12-09 10:39:01.007001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.593 [2024-12-09 10:39:01.007028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.594 [2024-12-09 10:39:01.012973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.594 [2024-12-09 10:39:01.013005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.594 [2024-12-09 10:39:01.013044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:28.594 [2024-12-09 10:39:01.019054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.594 [2024-12-09 10:39:01.019102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.594 [2024-12-09 10:39:01.019130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0024 
p:0 m:0 dnr:0 00:28:28.594 [2024-12-09 10:39:01.025809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.594 [2024-12-09 10:39:01.025844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.594 [2024-12-09 10:39:01.025872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:28.594 [2024-12-09 10:39:01.030506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x12169d0) 00:28:28.594 [2024-12-09 10:39:01.030541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.594 [2024-12-09 10:39:01.030570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:28.852 5446.00 IOPS, 680.75 MiB/s 00:28:28.852 Latency(us) 00:28:28.852 [2024-12-09T09:39:01.293Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.852 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:28.852 nvme0n1 : 2.00 5446.59 680.82 0.00 0.00 2933.49 719.08 8835.22 00:28:28.852 [2024-12-09T09:39:01.293Z] =================================================================================================================== 00:28:28.852 [2024-12-09T09:39:01.293Z] Total : 5446.59 680.82 0.00 0.00 2933.49 719.08 8835.22 00:28:28.852 { 00:28:28.852 "results": [ 00:28:28.852 { 00:28:28.852 "job": "nvme0n1", 00:28:28.852 "core_mask": "0x2", 00:28:28.852 "workload": "randread", 00:28:28.852 "status": "finished", 00:28:28.852 "queue_depth": 16, 00:28:28.852 "io_size": 131072, 00:28:28.852 "runtime": 2.002722, 00:28:28.852 "iops": 5446.587194827839, 00:28:28.852 "mibps": 
680.8233993534799, 00:28:28.852 "io_failed": 0, 00:28:28.852 "io_timeout": 0, 00:28:28.852 "avg_latency_us": 2933.4862296106153, 00:28:28.852 "min_latency_us": 719.0755555555555, 00:28:28.852 "max_latency_us": 8835.223703703703 00:28:28.852 } 00:28:28.852 ], 00:28:28.852 "core_count": 1 00:28:28.852 } 00:28:28.852 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:28.852 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:28.852 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:28.852 | .driver_specific 00:28:28.852 | .nvme_error 00:28:28.852 | .status_code 00:28:28.852 | .command_transient_transport_error' 00:28:28.852 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 352 > 0 )) 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2649183 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2649183 ']' 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2649183 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649183 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649183' 00:28:29.111 killing process with pid 2649183 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2649183 00:28:29.111 Received shutdown signal, test time was about 2.000000 seconds 00:28:29.111 00:28:29.111 Latency(us) 00:28:29.111 [2024-12-09T09:39:01.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.111 [2024-12-09T09:39:01.552Z] =================================================================================================================== 00:28:29.111 [2024-12-09T09:39:01.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:29.111 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2649183 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2649679 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 
128 -z 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2649679 /var/tmp/bperf.sock 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2649679 ']' 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:29.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:29.370 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.370 [2024-12-09 10:39:01.693518] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
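As a sanity check on the results block printed above: with `"io_size": 131072` (128 KiB, i.e. 1/8 MiB per I/O), the reported MiB/s is simply IOPS / 8, and IOPS is completed I/Os divided by runtime. A minimal sketch of that arithmetic, using the figures from the log (the completed I/O count of 10908 is inferred from iops × runtime; it is not printed in the log):

```python
# Reproduce the bdevperf summary arithmetic from the results JSON above.
runtime_s = 2.002722          # "runtime" from the results block
iops = 5446.587194827839      # "iops" from the results block
io_size = 131072              # "io_size": 128 KiB per I/O

# MiB/s = IOPS * bytes-per-I/O / bytes-per-MiB; with 128 KiB I/Os this is IOPS / 8.
mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))        # matches the reported 680.82 MiB/s

# The completed I/O count is not printed, but iops * runtime recovers it.
io_count = round(iops * runtime_s)
print(io_count)
```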
00:28:29.370 [2024-12-09 10:39:01.693612] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2649679 ] 00:28:29.370 [2024-12-09 10:39:01.759685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.629 [2024-12-09 10:39:01.819345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:29.629 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:29.629 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:29.629 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:29.629 10:39:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:29.887 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:29.887 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.887 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:29.887 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.887 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:29.887 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:30.454 nvme0n1 00:28:30.454 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:30.454 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.454 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.454 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.454 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:30.454 10:39:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:30.454 Running I/O for 2 seconds... 
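The burst of `data digest error` / `COMMAND TRANSIENT TRANSPORT ERROR` lines that follows is the expected outcome here: `accel_error_inject_error -o crc32c -t corrupt` makes the host's CRC32C calculation disagree with the DDGST carried in each PDU, and the controller was attached with `--ddgst` so that check is active. A minimal sketch of the digest comparison, using a pure-Python CRC-32C (Castagnoli) implementation (PDU framing is omitted; only the digest check that fails in the log is modeled):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), the polynomial NVMe/TCP uses for DDGST."""
    poly = 0x82F63B78  # reflected Castagnoli polynomial
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (poly if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value.
assert crc32c(b"123456789") == 0xE3069283

payload = b"\x00" * 4096
ddgst = crc32c(payload)          # digest the sender computes over the data

corrupted = ddgst ^ 0xDEADBEEF   # model an injected crc32c corruption
# The receiver recomputes the digest over the received data; any mismatch is
# reported as a data digest error and the command completes with a
# TRANSIENT TRANSPORT ERROR status, as seen below.
print("digest error" if corrupted != ddgst else "ok")
```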
00:28:30.454 [2024-12-09 10:39:02.880302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.454 [2024-12-09 10:39:02.880648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.454 [2024-12-09 10:39:02.880689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.454 [2024-12-09 10:39:02.894849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.454 [2024-12-09 10:39:02.895066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.454 [2024-12-09 10:39:02.895098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.909255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.909496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.909540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.924000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.924241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.924287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.938578] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.938884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.938941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.952972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.953223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.953255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.967294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.967599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.967629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.981619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.981866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.981911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:02.995838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:02.996086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:02.996130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.010324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.010585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.010630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.024605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.024890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.024935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.038941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.039194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:23592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:30.713 [2024-12-09 10:39:03.039225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.053238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.053550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.053580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.067592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.067839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.067882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.081929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.082251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:8693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.082297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.096304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.096555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:10567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.096599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.110594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.110842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.110871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.124811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.125059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.125102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.139137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.139473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.139510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.713 [2024-12-09 10:39:03.153039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.713 [2024-12-09 10:39:03.153308] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:16728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.713 [2024-12-09 10:39:03.153339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.167288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.972 [2024-12-09 10:39:03.167583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.167612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.181534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.972 [2024-12-09 10:39:03.181776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.181818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.195783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.972 [2024-12-09 10:39:03.196079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.196107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.210177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 
00:28:30.972 [2024-12-09 10:39:03.210385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.210427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.224284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.972 [2024-12-09 10:39:03.224576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.224619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.238507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.972 [2024-12-09 10:39:03.238768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.238812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.252871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.972 [2024-12-09 10:39:03.253177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.972 [2024-12-09 10:39:03.253207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.972 [2024-12-09 10:39:03.267038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.267357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.267403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.281270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.281499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.281544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.295511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.295801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.295845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.309792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.310072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.310121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.324041] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.324380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.324412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.338402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.338653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.338697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.352750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.353002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.353047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.367150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.367363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:2836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.367393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 
m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.381413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.381724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.381769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.395810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.396074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.396104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:30.973 [2024-12-09 10:39:03.409890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:30.973 [2024-12-09 10:39:03.410179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:30.973 [2024-12-09 10:39:03.410212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.253 [2024-12-09 10:39:03.424290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.253 [2024-12-09 10:39:03.424592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.253 [2024-12-09 10:39:03.424636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.253 [2024-12-09 10:39:03.438754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.253 [2024-12-09 10:39:03.439101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.253 [2024-12-09 10:39:03.439132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.253 [2024-12-09 10:39:03.453115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.253 [2024-12-09 10:39:03.453466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.253 [2024-12-09 10:39:03.453498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.253 [2024-12-09 10:39:03.467258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.467528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.467559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.481560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.481815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.481865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.495806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.496024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.496068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.510167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.510501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.510547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.524588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.524828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.524872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.538976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.539269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:31.254 [2024-12-09 10:39:03.539300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.553398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.553654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.553698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.567590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.567892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.567937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.581865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.582114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.582167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.596094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.596413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 
lba:9552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.596458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.610345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.610680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.610710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.624674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.624980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.625025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.638960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.639222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.639253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.653255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.653519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.653564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.667331] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.667626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.667670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.254 [2024-12-09 10:39:03.681618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.254 [2024-12-09 10:39:03.681867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.254 [2024-12-09 10:39:03.681915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.695700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.695948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.695991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.709941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 
00:28:31.511 [2024-12-09 10:39:03.710231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.710275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.724214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.724475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.724520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.738533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.738771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.738814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.752910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.753196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.753240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.767296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.767587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.767631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.781674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.781973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.782002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.511 [2024-12-09 10:39:03.795992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.511 [2024-12-09 10:39:03.796272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.511 [2024-12-09 10:39:03.796318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.810314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.810625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.810655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.824618] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.824857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.824900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.838881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.839189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.839220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.853266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.853563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:7011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.853608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 17701.00 IOPS, 69.14 MiB/s [2024-12-09T09:39:03.953Z] [2024-12-09 10:39:03.867491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.867731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.867774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.881839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.882119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.882162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.896262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.896534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.896580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.910587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.910905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.910936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.924622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.924909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.924945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.512 [2024-12-09 10:39:03.939054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.512 [2024-12-09 10:39:03.939304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.512 [2024-12-09 10:39:03.939349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:03.953065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:03.953285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:03.953316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:03.967247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:03.967562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:03.967606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:03.981442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:03.981765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:31.770 [2024-12-09 10:39:03.981794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:03.995675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:03.995979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:03.996023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.009936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.010184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.010213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.024337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.024649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.024694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.038537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.038784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:3148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.038828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.052760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.053045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.053090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.067053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.067381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.067412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.081480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.081734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.081777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:31.770 [2024-12-09 10:39:04.095742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:31.770 [2024-12-09 10:39:04.096007] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:31.770 [2024-12-09 10:39:04.096050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... further identical "Data digest error on tqpair=(0x1350e30)" / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplets (qid:1, cid:100-104, lba varies) from 10:39:04.110 through 10:39:04.851 elided ...]
00:28:32.544 [2024-12-09 10:39:04.865215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1350e30) with pdu=0x200016efe2e8 00:28:32.544 17819.50 IOPS, 69.61 MiB/s [2024-12-09T09:39:04.985Z] [2024-12-09 10:39:04.866220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:7358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:32.544 [2024-12-09 10:39:04.866253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:32.544 00:28:32.545 Latency(us) 00:28:32.545 [2024-12-09T09:39:04.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.545 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:32.545 nvme0n1 : 2.01 17820.16 69.61 0.00 0.00 7165.68 5509.88 16214.09 00:28:32.545 [2024-12-09T09:39:04.986Z] =================================================================================================================== 00:28:32.545 [2024-12-09T09:39:04.986Z] Total : 17820.16 69.61 0.00 0.00 7165.68 5509.88 16214.09 00:28:32.545 { 00:28:32.545 "results": [ 00:28:32.545 { 00:28:32.545 "job": "nvme0n1", 00:28:32.545 "core_mask": "0x2", 00:28:32.545 "workload": "randwrite", 00:28:32.545 "status": "finished", 00:28:32.545 "queue_depth": 128, 00:28:32.545 "io_size": 4096, 00:28:32.545 "runtime": 2.008905, 00:28:32.545 "iops": 17820.15575649421, 00:28:32.545 "mibps": 69.6099834238055, 00:28:32.545 "io_failed": 0, 00:28:32.545 "io_timeout": 0, 00:28:32.545 "avg_latency_us": 7165.682046736253, 00:28:32.545 "min_latency_us": 5509.878518518519, 00:28:32.545 "max_latency_us": 16214.091851851852 00:28:32.545 } 00:28:32.545 ], 00:28:32.545 "core_count": 1 00:28:32.545 } 00:28:32.545 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:32.545 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:32.545 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:32.545 | .driver_specific 00:28:32.545 | .nvme_error 00:28:32.545 | .status_code 00:28:32.545 | .command_transient_transport_error' 00:28:32.545 10:39:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2649679 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2649679 ']' 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2649679 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2649679 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2649679' 00:28:32.802 killing process with pid 2649679 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2649679 00:28:32.802 Received shutdown signal, test time was about 2.000000 seconds 00:28:32.802 00:28:32.802 Latency(us) 00:28:32.802 [2024-12-09T09:39:05.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:32.802 [2024-12-09T09:39:05.243Z] =================================================================================================================== 00:28:32.802 [2024-12-09T09:39:05.243Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:32.802 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2649679 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2650112 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2650112 /var/tmp/bperf.sock 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2650112 ']' 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:33.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
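The `get_transient_errcount` helper traced above pipes `bdev_get_iostat` output through a `jq` filter to pull out the transient transport error counter that the `(( 140 > 0 ))` check then asserts on. A minimal Python sketch of the same extraction, where the nested field names are taken from that `jq` filter and the sample payload (including the count of 140) is illustrative, not captured from a real run:

```python
import json

# Illustrative bdev_get_iostat-style payload; the nested path mirrors the
# jq filter used by get_transient_errcount in host/digest.sh:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#            | .command_transient_transport_error
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 140
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(iostat: dict) -> int:
    # Walk the same path as the jq filter, defaulting to 0 when a level
    # is absent (e.g. when --nvme-error-stat was not enabled).
    bdev = iostat["bdevs"][0]
    return (bdev.get("driver_specific", {})
                .get("nvme_error", {})
                .get("status_code", {})
                .get("command_transient_transport_error", 0))

print(get_transient_errcount(sample))
```

The test passes as long as this counter is greater than zero, i.e. at least one WRITE was completed with a transient transport error after digest corruption.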
00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.059 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:33.316 [2024-12-09 10:39:05.519311] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:33.316 [2024-12-09 10:39:05.519391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2650112 ] 00:28:33.316 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:33.316 Zero copy mechanism will not be used. 00:28:33.316 [2024-12-09 10:39:05.584628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.316 [2024-12-09 10:39:05.639359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.316 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.316 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:28:33.316 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:33.316 10:39:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:33.879 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:33.879 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.879 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.879 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.879 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.879 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.136 nvme0n1 00:28:34.136 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:34.136 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.136 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:34.136 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.136 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:34.136 10:39:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.394 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:34.394 Zero copy mechanism will not be used. 00:28:34.394 Running I/O for 2 seconds... 
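The `data_crc32_calc_done` errors that follow come from the NVMe/TCP data digest check: with `--ddgst` enabled, each data PDU carries a CRC-32C over its payload, and `accel_error_inject_error -o crc32c -t corrupt -i 32` deliberately corrupts that digest so WRITEs complete with TRANSIENT TRANSPORT ERROR. A bit-by-bit CRC-32C (Castagnoli) sketch for illustration only; SPDK itself uses an accelerated implementation:

```python
def crc32c(data: bytes) -> int:
    """Reflected CRC-32C (Castagnoli), the polynomial used for NVMe/TCP digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reversed polynomial 0x82F63B78 for the reflected algorithm.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

payload = b"123456789"
digest = crc32c(payload)
assert digest == 0xE3069283  # standard CRC-32C check value

# Any change to the payload (or an injected corruption of the digest
# itself, as in this test) makes the receiver's recomputed CRC mismatch:
assert crc32c(b"123456788") != digest
```

On a mismatch the target fails the command at the transport level rather than writing corrupt data, which is exactly the error path this test exercises.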
00:28:34.394 [2024-12-09 10:39:06.664164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.394 [2024-12-09 10:39:06.664267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.394 [2024-12-09 10:39:06.664311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.394 [2024-12-09 10:39:06.669695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.394 [2024-12-09 10:39:06.669779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.394 [2024-12-09 10:39:06.669832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.394 [2024-12-09 10:39:06.674904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.394 [2024-12-09 10:39:06.674985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.394 [2024-12-09 10:39:06.675021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.394 [2024-12-09 10:39:06.680018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.680121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.680170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.685004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.685093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.685128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.689977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.690057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.690093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.695010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.695104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.695154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.700737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.700809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.700842] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.705737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.705853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.705888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.710616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.710714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.710748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.715462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.715539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.715581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.720453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.720550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.720585] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.725573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.725649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.725684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.731186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.731312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.731342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.736794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.736891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.736928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.741711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.741800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.741836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.746824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.746923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.746958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.751783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.751872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.751908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.756749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.756848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.756881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.761671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.761776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.761810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.766802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.766883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.766918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.771689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.771763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.771799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.776599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.776675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.776710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.781427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.781504] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.781538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.786325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.786434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.786469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.791349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.791465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.791499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.796383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.796491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.796529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.801693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 
00:28:34.395 [2024-12-09 10:39:06.801833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.801864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.807404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.807784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.807814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.813837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.395 [2024-12-09 10:39:06.814172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.395 [2024-12-09 10:39:06.814203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.395 [2024-12-09 10:39:06.819791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.396 [2024-12-09 10:39:06.820052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.396 [2024-12-09 10:39:06.820083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.396 [2024-12-09 10:39:06.824446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.396 [2024-12-09 10:39:06.824753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.396 [2024-12-09 10:39:06.824793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.396 [2024-12-09 10:39:06.828993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.396 [2024-12-09 10:39:06.829230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.396 [2024-12-09 10:39:06.829261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.396 [2024-12-09 10:39:06.833953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.396 [2024-12-09 10:39:06.834197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.396 [2024-12-09 10:39:06.834228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.838851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.839088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.839119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.843221] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.843450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.843480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.847586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.847808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.847844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.851961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.852164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.852194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.856296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.856581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.856611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:34.655 [2024-12-09 10:39:06.860646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.860873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.860903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.864943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.865169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.865200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.869321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.869606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.869637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.873758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.873969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.874015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.878248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.878473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.878504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.882727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.882924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.882955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.887682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.887908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.887953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.892503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.892739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.892769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.897509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.897729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.897759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.903071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.903384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.903416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.908829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.909094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.909144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.914738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.915112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.655 [2024-12-09 10:39:06.915153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.920838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.921174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.921206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.927322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.927607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.927638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.932684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.932858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.932888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.937624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.937816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.937846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.942520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.942744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.655 [2024-12-09 10:39:06.942774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.655 [2024-12-09 10:39:06.947912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.655 [2024-12-09 10:39:06.948160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.656 [2024-12-09 10:39:06.948202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.656 [2024-12-09 10:39:06.953152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.656 [2024-12-09 10:39:06.953364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.656 [2024-12-09 10:39:06.953394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.656 [2024-12-09 10:39:06.957791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.656 [2024-12-09 10:39:06.957980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.656 [2024-12-09 10:39:06.958010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.656 [2024-12-09 10:39:06.961961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.656 [2024-12-09 10:39:06.962209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.656 [2024-12-09 10:39:06.962239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:34.656 [2024-12-09 10:39:06.966064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.656 [2024-12-09 10:39:06.966285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.656 [2024-12-09 10:39:06.966316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.656 [2024-12-09 10:39:06.970224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.656 [2024-12-09 10:39:06.970445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.656 [2024-12-09 10:39:06.970474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.656 [2024-12-09 10:39:06.974350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 
00:28:34.656 [2024-12-09 10:39:06.974567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:34.656 [2024-12-09 10:39:06.974603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:34.656 [2024-12-09 10:39:06.978507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
[... the same three-message sequence repeats continuously from 10:39:06.978 through 10:39:07.339: tcp.c:2241:data_crc32_calc_done *ERROR* "Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8", followed by a nvme_qpair.c:243 WRITE command *NOTICE* (sqid:1, varying cid:0/1, nsid:1, varying lba, len:32, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and a nvme_qpair.c:474 completion *NOTICE* "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" (qid:1, cdw0:0, sqhd cycling 0002/0022/0042/0062, p:0 m:0 dnr:0); elapsed-time prefix advances from 00:28:34.656 to 00:28:34.918 ...]
00:28:34.918 [2024-12-09 10:39:07.339777]
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.918 [2024-12-09 10:39:07.339980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.918 [2024-12-09 10:39:07.340015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:34.918 [2024-12-09 10:39:07.343917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.918 [2024-12-09 10:39:07.344115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.918 [2024-12-09 10:39:07.344152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:34.918 [2024-12-09 10:39:07.347899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.918 [2024-12-09 10:39:07.348123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.918 [2024-12-09 10:39:07.348162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:34.918 [2024-12-09 10:39:07.352083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:34.918 [2024-12-09 10:39:07.352316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.918 [2024-12-09 10:39:07.352347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:35.176 [2024-12-09 10:39:07.356340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.356578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.356608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.360507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.360758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.360788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.364754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.364938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.364968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.368830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.369095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.369125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.372975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.373204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.373234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.377209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.377439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.377484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.381327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.381573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.381602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.385531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.385764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.385794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.389843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.390055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.390084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.393884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.394075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.394104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.398101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.398364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.398395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.402293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.402539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.176 [2024-12-09 10:39:07.402569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.406464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.406664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.406693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.410660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.410875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.410910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.414771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.415023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.415052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.418968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.419214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.419245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.423184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.423382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.423417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.427463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.427689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.427718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.431640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.431860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.431890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.435906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.436135] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.436178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.176 [2024-12-09 10:39:07.440594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.176 [2024-12-09 10:39:07.440850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.176 [2024-12-09 10:39:07.440880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.445738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.446067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.446096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.451714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.452020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.452056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.457092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.457409] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.457440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.462306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.462593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.462623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.467490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.467774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.467804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.472642] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.472988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.473019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.477915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with 
pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.478203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.478234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.483287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.483545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.483576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.488500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.488708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.488739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.493658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.493963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.493992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.498945] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.499310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.499340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.504252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.504457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.504487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.509522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.509859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.509890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.514783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.515019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.515050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 
10:39:07.520136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.520378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.520409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.525439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.525689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.525719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.530276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.530499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.530529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.534581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.534833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.534864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.538807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.539004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.539034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.543022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.543269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.543301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.547404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.547632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.547662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.552149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.552417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.552447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.557279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.557477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.557507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.561745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.562066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.562097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.177 [2024-12-09 10:39:07.566974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.177 [2024-12-09 10:39:07.567314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.177 [2024-12-09 10:39:07.567345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.178 [2024-12-09 10:39:07.572118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.178 [2024-12-09 10:39:07.572370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.178 [2024-12-09 10:39:07.572400] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.578027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.578304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.578335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.582388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.582638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.582673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.586617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.586838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.586868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.591133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.591426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.591456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.595497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.595734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.595764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.599948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.600206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.600238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.604384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.604615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.604645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.608476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.608666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.608695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.178 [2024-12-09 10:39:07.612615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.178 [2024-12-09 10:39:07.612856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.178 [2024-12-09 10:39:07.612887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.616891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.617104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.617135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.620984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.621201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.621233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.625164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.625385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.625416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.629360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.629587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.629617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.633607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.633802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.633832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.637638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.637850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.637879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.641828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.642022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.642052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.646004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.646208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.646238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.650130] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.650327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.650357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.654270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.654472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.654502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.658470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.658669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.658699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.662662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.662871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.662901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.436 6557.00 IOPS, 819.62 MiB/s [2024-12-09T09:39:07.877Z] [2024-12-09 10:39:07.667925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.668121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.668165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.672175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.672329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.672362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.676437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.676588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.676619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.680762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.680964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.436 [2024-12-09 10:39:07.680995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.436 [2024-12-09 10:39:07.685028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.436 [2024-12-09 10:39:07.685223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.685254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.689624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.689794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.689824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.694482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.694650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.694694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.699026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.699233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.699264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.703755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.703901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.703931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.708422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.708580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.708610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.712922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.713088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.713118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.717316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.717486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.717516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.722554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.722644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.722680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.727247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.727387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.727420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.731737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.731912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.731942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.736167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.736339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.736370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.740510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.740665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.740695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.744976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.745127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.745167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.749452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.749588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.749618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.753829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.753992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.754022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.758296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.758437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.758467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.763516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.763717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.763747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.768168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.768339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.768370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.772727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.772875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.772909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.777341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.777488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.777518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.781828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.781968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.781998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.786299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.786425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.786455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.790824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.790971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.791001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.795199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.795343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.795388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.799735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.799891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.799920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.804243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.804406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.804435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.437 [2024-12-09 10:39:07.808777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.437 [2024-12-09 10:39:07.809019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.437 [2024-12-09 10:39:07.809048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.813936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.814110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.814152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.819455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.819763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.819794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.825506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.825743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.825773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.830478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.830679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.830709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.835716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.835880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.835910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.840389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.840562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.840592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.845200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.845492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.845522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.850249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.850558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.850589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.855330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.855632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.855663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.860822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.861011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.861041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.866651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.866747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.866781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.871054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.871181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.871218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.438 [2024-12-09 10:39:07.875560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.438 [2024-12-09 10:39:07.875688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.438 [2024-12-09 10:39:07.875719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.880197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.880316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.880348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.884720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.884874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.884904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.889234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.889356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.889388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.893893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.893986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.894021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.898229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.898366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.898408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.902676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.902806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.902838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.907058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.907163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.907211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.911501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.911608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.911640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.916113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.916200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.916235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.920533] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.920638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.920673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.924996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.925072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.697 [2024-12-09 10:39:07.925123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:28:35.697 [2024-12-09 10:39:07.929485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8
00:28:35.697 [2024-12-09 10:39:07.929586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.929620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.933945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.934044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.934079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.938388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.938550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.938581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.942923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.943073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.943103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.947476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.947640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.947670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.951838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.951960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.952004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.956261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.956385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.956416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.960525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.960621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.960658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.965172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 
00:28:35.697 [2024-12-09 10:39:07.965280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.965313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.970073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.970157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.970199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.974367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.974451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.974489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.978513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.978586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.697 [2024-12-09 10:39:07.978625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.697 [2024-12-09 10:39:07.982875] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.697 [2024-12-09 10:39:07.982998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:07.983028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:07.987364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:07.987581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:07.987612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:07.992620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:07.992803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:07.992833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:07.998343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:07.998498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:07.998531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.004224] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.004343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.004378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.008689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.008774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.008811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.013084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.013214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.013246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.017636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.017803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.017845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:28:35.698 [2024-12-09 10:39:08.022268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.022413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.022447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.026786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.026941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.026972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.031182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.031293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.031325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.035727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.035845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.035888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.040315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.698 [2024-12-09 10:39:08.040508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.698 [2024-12-09 10:39:08.040544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.698 [2024-12-09 10:39:08.044966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.045088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.045119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.049312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.049461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.049492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.054688] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.054869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.054898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.060053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.060206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.060236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.065281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.065408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.065453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.070433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.070618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.070649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.075655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.075824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.715 [2024-12-09 10:39:08.075853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.080874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.081048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.081092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.085920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.086048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.086082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.090702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.715 [2024-12-09 10:39:08.090878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.715 [2024-12-09 10:39:08.090908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.715 [2024-12-09 10:39:08.095788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.095967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.095996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.101025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.101191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.101222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.106212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.106380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.106410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.111252] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.111457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.111487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.116375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.116545] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.116575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.121554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.121744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.121774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.126653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.126825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.126855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.131807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.716 [2024-12-09 10:39:08.131916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.716 [2024-12-09 10:39:08.131949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.716 [2024-12-09 10:39:08.136957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 
00:28:35.976 [2024-12-09 10:39:08.137115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.137157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.142279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.142451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.142481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.147430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.147546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.147581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.152464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.152603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.152632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.157660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.157816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.157846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.162864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.163020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.163050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.168045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.168238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.168269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.173124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.173330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.173361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.178117] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.178272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.178304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.183323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.183483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.183514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.188408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.188566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.188596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.193599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.193769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.193800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:28:35.976 [2024-12-09 10:39:08.199053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.199229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.199261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.204052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.204196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.204229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.209179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.209380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.209411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.214575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.214733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.214764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.220190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.220317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.220349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.225347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.225528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.225558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.230660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.230855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.230886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.235887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.236065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.236117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.241213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.241388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.241419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.246660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.246852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.246882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.252007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.252192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.976 [2024-12-09 10:39:08.252223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.976 [2024-12-09 10:39:08.257085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.976 [2024-12-09 10:39:08.257265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:35.976 [2024-12-09 10:39:08.257296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.262382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.262497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.262531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.267614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.267804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.267834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.272744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.272892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.272922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.277903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.278087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.278132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.283014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.283189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.283225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.288228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.288385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.288415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.293326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.293480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.293510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.298400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.298559] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.298588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.303695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.303879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.303910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.308743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.308959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.308989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.314034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.314201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.314232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.319299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 
00:28:35.977 [2024-12-09 10:39:08.319535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.319566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.324423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.324601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.324630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.329404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.329545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.329575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.334478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.334635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.334665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.339726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.339834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.339865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.344859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.345047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.345077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.349937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.350130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.350169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.355022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.355193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.355223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.360063] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.360283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.360314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.365278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.365490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.365522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.370368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.370577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.370607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.375459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.375653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.375682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:28:35.977 [2024-12-09 10:39:08.380704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.380894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.380923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.385758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.385934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.385973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.390792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.977 [2024-12-09 10:39:08.390899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.977 [2024-12-09 10:39:08.390931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:35.977 [2024-12-09 10:39:08.395914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.978 [2024-12-09 10:39:08.396077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.978 [2024-12-09 10:39:08.396112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:35.978 [2024-12-09 10:39:08.401018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.978 [2024-12-09 10:39:08.401194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.978 [2024-12-09 10:39:08.401224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:35.978 [2024-12-09 10:39:08.406100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.978 [2024-12-09 10:39:08.406274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.978 [2024-12-09 10:39:08.406304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:35.978 [2024-12-09 10:39:08.411221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:35.978 [2024-12-09 10:39:08.411355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.978 [2024-12-09 10:39:08.411386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.416320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.416436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.416473] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.421432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.421625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.421656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.426538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.426720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.426751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.431635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.431799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.431830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.436770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.436879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:36.238 [2024-12-09 10:39:08.436910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.441928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.442080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.442110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.447089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.447286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.447316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.452218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.452403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.452433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.457300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.457490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.457520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.462411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.462609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.462639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.467630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.467902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.467932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.472679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.472858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.472888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.477714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.477976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.478007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.482778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.483043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.483072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.487987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.488263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.488294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.493039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.493351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.493383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.498291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 
00:28:36.238 [2024-12-09 10:39:08.498551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.498582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.503375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.503632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.503663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.508415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.508703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.508733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.513593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.513799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.513829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.518736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.238 [2024-12-09 10:39:08.518993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.238 [2024-12-09 10:39:08.519023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.238 [2024-12-09 10:39:08.523811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.524084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.524115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.528903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.529187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.529219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.533977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.534277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.534307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.539035] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.539326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.539356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.544147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.544403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.544433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.549227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.549507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.549543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.554634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.554834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.554865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:28:36.239 [2024-12-09 10:39:08.560390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.560650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.560681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.564796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.564992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.565022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.569001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.569242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.569287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.573479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.573666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.573697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.578037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.578250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.578281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.582390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.582584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.582615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.586730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.586933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.586963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.591164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.591353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.591383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.595764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.595987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.596018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.599950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.600209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.600241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.604622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.604876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.604906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.609790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.610099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:36.239 [2024-12-09 10:39:08.610130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.614292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.614530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.614561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.619312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.619566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.619596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.624359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.624665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.624695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.629477] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.629736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.629766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.634834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.635072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.635102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.640491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.640689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.640720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.645655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.645919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.645948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.650717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.650942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.239 [2024-12-09 10:39:08.650972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:36.239 [2024-12-09 10:39:08.655906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.239 [2024-12-09 10:39:08.656172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.240 [2024-12-09 10:39:08.656202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:36.240 [2024-12-09 10:39:08.660849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.240 [2024-12-09 10:39:08.661104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.240 [2024-12-09 10:39:08.661158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:36.240 [2024-12-09 10:39:08.665938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1351170) with pdu=0x200016eff3c8 00:28:36.240 [2024-12-09 10:39:08.666224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.240 [2024-12-09 10:39:08.666255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:36.240 6437.00 IOPS, 804.62 MiB/s 00:28:36.240 Latency(us) 00:28:36.240 [2024-12-09T09:39:08.681Z] Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:28:36.240 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:36.240 nvme0n1 : 2.00 6432.76 804.09 0.00 0.00 2479.88 1711.22 6650.69 00:28:36.240 [2024-12-09T09:39:08.681Z] =================================================================================================================== 00:28:36.240 [2024-12-09T09:39:08.681Z] Total : 6432.76 804.09 0.00 0.00 2479.88 1711.22 6650.69 00:28:36.240 { 00:28:36.240 "results": [ 00:28:36.240 { 00:28:36.240 "job": "nvme0n1", 00:28:36.240 "core_mask": "0x2", 00:28:36.240 "workload": "randwrite", 00:28:36.240 "status": "finished", 00:28:36.240 "queue_depth": 16, 00:28:36.240 "io_size": 131072, 00:28:36.240 "runtime": 2.003806, 00:28:36.240 "iops": 6432.758460649384, 00:28:36.240 "mibps": 804.094807581173, 00:28:36.240 "io_failed": 0, 00:28:36.240 "io_timeout": 0, 00:28:36.240 "avg_latency_us": 2479.880790046835, 00:28:36.240 "min_latency_us": 1711.2177777777779, 00:28:36.240 "max_latency_us": 6650.69037037037 00:28:36.240 } 00:28:36.240 ], 00:28:36.240 "core_count": 1 00:28:36.240 } 00:28:36.498 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:36.498 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:36.498 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:36.498 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:36.498 | .driver_specific 00:28:36.498 | .nvme_error 00:28:36.498 | .status_code 00:28:36.498 | .command_transient_transport_error' 00:28:36.755 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 416 > 0 )) 00:28:36.755 10:39:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2650112 00:28:36.755 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2650112 ']' 00:28:36.755 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2650112 00:28:36.755 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:36.755 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.755 10:39:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2650112 00:28:36.755 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.755 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.755 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2650112' 00:28:36.755 killing process with pid 2650112 00:28:36.755 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2650112 00:28:36.755 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.755 00:28:36.755 Latency(us) 00:28:36.755 [2024-12-09T09:39:09.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.755 [2024-12-09T09:39:09.196Z] =================================================================================================================== 00:28:36.755 [2024-12-09T09:39:09.196Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.755 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2650112 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 
-- # killprocess 2648633 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2648633 ']' 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2648633 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2648633 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2648633' 00:28:37.013 killing process with pid 2648633 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2648633 00:28:37.013 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2648633 00:28:37.296 00:28:37.296 real 0m15.762s 00:28:37.296 user 0m31.477s 00:28:37.296 sys 0m4.385s 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:37.296 ************************************ 00:28:37.296 END TEST nvmf_digest_error 00:28:37.296 ************************************ 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:37.296 10:39:09 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:37.296 rmmod nvme_tcp 00:28:37.296 rmmod nvme_fabrics 00:28:37.296 rmmod nvme_keyring 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2648633 ']' 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2648633 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2648633 ']' 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2648633 00:28:37.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2648633) - No such process 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2648633 is not found' 00:28:37.296 Process with pid 2648633 is not found 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:37.296 10:39:09 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.296 10:39:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:39.832 00:28:39.832 real 0m36.329s 00:28:39.832 user 1m3.472s 00:28:39.832 sys 0m10.533s 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:39.832 ************************************ 00:28:39.832 END TEST nvmf_digest 00:28:39.832 ************************************ 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh 
--transport=tcp 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.832 ************************************ 00:28:39.832 START TEST nvmf_bdevperf 00:28:39.832 ************************************ 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:39.832 * Looking for test storage... 00:28:39.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 
00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:39.832 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 
00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.833 --rc genhtml_branch_coverage=1 00:28:39.833 --rc genhtml_function_coverage=1 00:28:39.833 --rc genhtml_legend=1 00:28:39.833 --rc geninfo_all_blocks=1 00:28:39.833 --rc geninfo_unexecuted_blocks=1 00:28:39.833 00:28:39.833 ' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.833 --rc genhtml_branch_coverage=1 00:28:39.833 --rc genhtml_function_coverage=1 00:28:39.833 --rc genhtml_legend=1 00:28:39.833 --rc geninfo_all_blocks=1 00:28:39.833 --rc geninfo_unexecuted_blocks=1 00:28:39.833 00:28:39.833 ' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.833 --rc genhtml_branch_coverage=1 00:28:39.833 --rc genhtml_function_coverage=1 00:28:39.833 --rc genhtml_legend=1 00:28:39.833 --rc geninfo_all_blocks=1 00:28:39.833 --rc geninfo_unexecuted_blocks=1 00:28:39.833 00:28:39.833 ' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:39.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.833 --rc genhtml_branch_coverage=1 00:28:39.833 --rc genhtml_function_coverage=1 00:28:39.833 --rc genhtml_legend=1 00:28:39.833 --rc geninfo_all_blocks=1 00:28:39.833 --rc geninfo_unexecuted_blocks=1 00:28:39.833 00:28:39.833 ' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:39.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:28:39.833 10:39:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:41.731 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:41.732 Found 
0000:09:00.0 (0x8086 - 0x159b) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:41.732 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:41.732 Found net devices under 0000:09:00.0: cvl_0_0 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:41.732 Found net devices under 0000:09:00.1: cvl_0_1 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:41.732 10:39:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:41.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:41.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:28:41.732 00:28:41.732 --- 10.0.0.2 ping statistics --- 00:28:41.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.732 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:41.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:41.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:28:41.732 00:28:41.732 --- 10.0.0.1 ping statistics --- 00:28:41.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:41.732 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2653103 00:28:41.732 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2653103 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2653103 ']' 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:41.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:41.733 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.733 [2024-12-09 10:39:14.089272] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:41.733 [2024-12-09 10:39:14.089360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:41.733 [2024-12-09 10:39:14.159477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:41.990 [2024-12-09 10:39:14.218111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:41.990 [2024-12-09 10:39:14.218167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:41.990 [2024-12-09 10:39:14.218188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:41.990 [2024-12-09 10:39:14.218205] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:41.990 [2024-12-09 10:39:14.218219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:41.990 [2024-12-09 10:39:14.219816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:41.990 [2024-12-09 10:39:14.219879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:41.990 [2024-12-09 10:39:14.219882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.990 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.991 [2024-12-09 10:39:14.373858] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.991 10:39:14 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.991 Malloc0 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.991 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:42.248 [2024-12-09 10:39:14.441928] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.248 { 00:28:42.248 "params": { 00:28:42.248 "name": "Nvme$subsystem", 00:28:42.248 "trtype": "$TEST_TRANSPORT", 00:28:42.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.248 "adrfam": "ipv4", 00:28:42.248 "trsvcid": "$NVMF_PORT", 00:28:42.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.248 "hdgst": ${hdgst:-false}, 00:28:42.248 "ddgst": ${ddgst:-false} 00:28:42.248 }, 00:28:42.248 "method": "bdev_nvme_attach_controller" 00:28:42.248 } 00:28:42.248 EOF 00:28:42.248 )") 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:42.248 10:39:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:42.248 "params": { 00:28:42.248 "name": "Nvme1", 00:28:42.248 "trtype": "tcp", 00:28:42.248 "traddr": "10.0.0.2", 00:28:42.248 "adrfam": "ipv4", 00:28:42.248 "trsvcid": "4420", 00:28:42.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.248 "hdgst": false, 00:28:42.248 "ddgst": false 00:28:42.248 }, 00:28:42.248 "method": "bdev_nvme_attach_controller" 00:28:42.248 }' 00:28:42.248 [2024-12-09 10:39:14.494067] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:42.248 [2024-12-09 10:39:14.494178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653128 ] 00:28:42.248 [2024-12-09 10:39:14.562338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.248 [2024-12-09 10:39:14.624348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.812 Running I/O for 1 seconds... 
00:28:43.744 8493.00 IOPS, 33.18 MiB/s 00:28:43.744 Latency(us) 00:28:43.744 [2024-12-09T09:39:16.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.744 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:43.744 Verification LBA range: start 0x0 length 0x4000 00:28:43.744 Nvme1n1 : 1.02 8515.57 33.26 0.00 0.00 14961.85 2888.44 13010.11 00:28:43.744 [2024-12-09T09:39:16.185Z] =================================================================================================================== 00:28:43.744 [2024-12-09T09:39:16.185Z] Total : 8515.57 33.26 0.00 0.00 14961.85 2888.44 13010.11 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2653391 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.000 { 00:28:44.000 "params": { 00:28:44.000 "name": "Nvme$subsystem", 00:28:44.000 "trtype": "$TEST_TRANSPORT", 00:28:44.000 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.000 "adrfam": "ipv4", 00:28:44.000 "trsvcid": "$NVMF_PORT", 00:28:44.000 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.000 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.000 "hdgst": ${hdgst:-false}, 00:28:44.000 "ddgst": 
${ddgst:-false} 00:28:44.000 }, 00:28:44.000 "method": "bdev_nvme_attach_controller" 00:28:44.000 } 00:28:44.000 EOF 00:28:44.000 )") 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:44.000 10:39:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:44.000 "params": { 00:28:44.000 "name": "Nvme1", 00:28:44.000 "trtype": "tcp", 00:28:44.000 "traddr": "10.0.0.2", 00:28:44.000 "adrfam": "ipv4", 00:28:44.001 "trsvcid": "4420", 00:28:44.001 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.001 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.001 "hdgst": false, 00:28:44.001 "ddgst": false 00:28:44.001 }, 00:28:44.001 "method": "bdev_nvme_attach_controller" 00:28:44.001 }' 00:28:44.001 [2024-12-09 10:39:16.295553] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:28:44.001 [2024-12-09 10:39:16.295633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2653391 ] 00:28:44.001 [2024-12-09 10:39:16.363088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.001 [2024-12-09 10:39:16.420734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.258 Running I/O for 15 seconds... 
00:28:46.565 8493.00 IOPS, 33.18 MiB/s [2024-12-09T09:39:19.265Z] 8606.50 IOPS, 33.62 MiB/s [2024-12-09T09:39:19.265Z] 10:39:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2653103 00:28:46.824 10:39:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:28:46.824 [2024-12-09 10:39:19.263725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.263775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.263807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.263824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.263841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.263855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.263871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.263886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.263901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.263915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.263957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.263972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.263986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:46.824 [2024-12-09 10:39:19.264119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:47352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:46.824 [2024-12-09 10:39:19.264343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:47368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.824 [2024-12-09 10:39:19.264358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair abort notices omitted: queued READ commands (lba 47368-47960) and WRITE commands (lba 47976-48256) on qid:1, each logged by nvme_io_qpair_print_command and completed by spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) ...]
00:28:47.089 [2024-12-09 10:39:19.267906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb713a0 is same with the state(6) to be set 00:28:47.089 [2024-12-09 10:39:19.267922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:47.089 [2024-12-09 10:39:19.267934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:47.089 [2024-12-09 10:39:19.267957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47968 len:8 PRP1 0x0 PRP2 0x0 00:28:47.089 [2024-12-09 10:39:19.267980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.089 [2024-12-09 10:39:19.271459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.089 [2024-12-09 10:39:19.271540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*:
Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.089 [2024-12-09 10:39:19.272301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.089 [2024-12-09 10:39:19.272334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.089 [2024-12-09 10:39:19.272352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.089 [2024-12-09 10:39:19.272603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.089 [2024-12-09 10:39:19.272817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.089 [2024-12-09 10:39:19.272836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.089 [2024-12-09 10:39:19.272851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.089 [2024-12-09 10:39:19.272867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.089 [2024-12-09 10:39:19.285072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.285511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.285539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.285569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.285808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.286005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.286024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.089 [2024-12-09 10:39:19.286036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.089 [2024-12-09 10:39:19.286048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.089 [2024-12-09 10:39:19.298313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.298660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.298689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.298704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.298930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.299168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.299204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.089 [2024-12-09 10:39:19.299218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.089 [2024-12-09 10:39:19.299230] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.089 [2024-12-09 10:39:19.311551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.311922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.311950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.311966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.312215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.312418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.312442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.089 [2024-12-09 10:39:19.312455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.089 [2024-12-09 10:39:19.312466] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.089 [2024-12-09 10:39:19.324713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.325198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.325227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.325243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.325478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.325690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.325709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.089 [2024-12-09 10:39:19.325721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.089 [2024-12-09 10:39:19.325732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.089 [2024-12-09 10:39:19.337898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.338270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.338299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.338314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.338553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.338764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.338783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.089 [2024-12-09 10:39:19.338795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.089 [2024-12-09 10:39:19.338806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.089 [2024-12-09 10:39:19.351196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.351623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.351664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.351681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.351924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.352134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.352178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.089 [2024-12-09 10:39:19.352192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.089 [2024-12-09 10:39:19.352208] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.089 [2024-12-09 10:39:19.364485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.089 [2024-12-09 10:39:19.364932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.089 [2024-12-09 10:39:19.364960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.089 [2024-12-09 10:39:19.364976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.089 [2024-12-09 10:39:19.365226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.089 [2024-12-09 10:39:19.365429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.089 [2024-12-09 10:39:19.365463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.365475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.365487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.377676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.378083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.378125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.378150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.378412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.378644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.378663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.378675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.378686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.391044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.391469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.391496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.391511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.391750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.391962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.391980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.391992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.392003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.404263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.404616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.404649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.404666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.404903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.405115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.405135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.405177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.405189] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.417473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.417892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.417941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.417956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.418219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.418427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.418461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.418474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.418485] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.430816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.431190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.431219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.431236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.431483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.431695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.431714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.431726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.431737] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.444396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.444774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.444817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.444834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.445065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.445289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.445309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.445321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.445333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.457673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.458103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.458167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.458183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.458415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.458626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.458645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.458656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.458667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.470854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.471273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.471303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.471319] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.471548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.471759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.471777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.471789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.471800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.484116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.484533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.484576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.484592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.484845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.485041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.090 [2024-12-09 10:39:19.485064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.090 [2024-12-09 10:39:19.485077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.090 [2024-12-09 10:39:19.485088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.090 [2024-12-09 10:39:19.497361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.090 [2024-12-09 10:39:19.497747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.090 [2024-12-09 10:39:19.497775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.090 [2024-12-09 10:39:19.497791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.090 [2024-12-09 10:39:19.498009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.090 [2024-12-09 10:39:19.498230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.091 [2024-12-09 10:39:19.498250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.091 [2024-12-09 10:39:19.498262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.091 [2024-12-09 10:39:19.498273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.091 [2024-12-09 10:39:19.510608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.091 [2024-12-09 10:39:19.511036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.091 [2024-12-09 10:39:19.511063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.091 [2024-12-09 10:39:19.511079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.091 [2024-12-09 10:39:19.511347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.091 [2024-12-09 10:39:19.511582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.091 [2024-12-09 10:39:19.511601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.091 [2024-12-09 10:39:19.511613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.091 [2024-12-09 10:39:19.511624] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.091 [2024-12-09 10:39:19.524357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.524815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.524884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.524901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.525119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.525353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.525376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.525390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.525402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.538316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.538853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.538883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.538900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.539118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.539352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.539375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.539389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.539402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.551802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.552153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.552183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.552199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.552416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.552640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.552660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.552673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.552684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.565182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.565621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.565664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.565680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.565951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.566187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.566217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.566231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.566244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.578675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.579053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.579100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.579116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.579370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.579597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.579617] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.579629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.579640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.592063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.592475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.592504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.592520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.592751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.592968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.592988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.593000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.593011] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.605548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.605917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.605968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.605983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.606222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.606460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.606495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.606508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.606520] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.619005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.619369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.619398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.619414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.619657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.619882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.619902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.619915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.619927] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.632447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.351 [2024-12-09 10:39:19.632874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.351 [2024-12-09 10:39:19.632903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.351 [2024-12-09 10:39:19.632919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.351 [2024-12-09 10:39:19.633159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.351 [2024-12-09 10:39:19.633376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.351 [2024-12-09 10:39:19.633396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.351 [2024-12-09 10:39:19.633410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.351 [2024-12-09 10:39:19.633437] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.351 [2024-12-09 10:39:19.645793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.646300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.646329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.646346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.646580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.646798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.646817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.646830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.646841] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.659071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.659481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.659524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.659541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.659790] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.659992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.660011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.660029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.660042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 7429.67 IOPS, 29.02 MiB/s [2024-12-09T09:39:19.793Z] [2024-12-09 10:39:19.672377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.672728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.672757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.672773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.673006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.673266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.673287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.673300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.673314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.685736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.686109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.686145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.686164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.686392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.686610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.686629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.686641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.686652] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.699145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.699509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.699538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.699553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.699787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.700005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.700024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.700036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.700048] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.712622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.713063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.713091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.713107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.713348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.713588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.713607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.713619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.713630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.725968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.726351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.726379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.726395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.726638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.726856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.726875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.726887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.726899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.739258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.739619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.739648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.739664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.739909] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.740111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.740130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.740166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.740181] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.752618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.753056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.753090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.753106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.753360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.753580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.753599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.753612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.753623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.765930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.352 [2024-12-09 10:39:19.766378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.352 [2024-12-09 10:39:19.766407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.352 [2024-12-09 10:39:19.766423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.352 [2024-12-09 10:39:19.766668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.352 [2024-12-09 10:39:19.766871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.352 [2024-12-09 10:39:19.766890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.352 [2024-12-09 10:39:19.766902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.352 [2024-12-09 10:39:19.766913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.352 [2024-12-09 10:39:19.779193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.353 [2024-12-09 10:39:19.779596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.353 [2024-12-09 10:39:19.779624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.353 [2024-12-09 10:39:19.779640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.353 [2024-12-09 10:39:19.779857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.353 [2024-12-09 10:39:19.780104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.353 [2024-12-09 10:39:19.780126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.353 [2024-12-09 10:39:19.780146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.353 [2024-12-09 10:39:19.780161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.613 [2024-12-09 10:39:19.792873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.613 [2024-12-09 10:39:19.793225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.613 [2024-12-09 10:39:19.793257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.613 [2024-12-09 10:39:19.793274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.613 [2024-12-09 10:39:19.793526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.613 [2024-12-09 10:39:19.793729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.613 [2024-12-09 10:39:19.793748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.613 [2024-12-09 10:39:19.793760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.613 [2024-12-09 10:39:19.793772] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.613 [2024-12-09 10:39:19.806256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.613 [2024-12-09 10:39:19.806642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.613 [2024-12-09 10:39:19.806673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.613 [2024-12-09 10:39:19.806690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.613 [2024-12-09 10:39:19.806936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.613 [2024-12-09 10:39:19.807180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.613 [2024-12-09 10:39:19.807201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.613 [2024-12-09 10:39:19.807214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.613 [2024-12-09 10:39:19.807226] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.613 [2024-12-09 10:39:19.819660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.613 [2024-12-09 10:39:19.820019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.613 [2024-12-09 10:39:19.820049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.613 [2024-12-09 10:39:19.820065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.613 [2024-12-09 10:39:19.820307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.613 [2024-12-09 10:39:19.820550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.613 [2024-12-09 10:39:19.820570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.613 [2024-12-09 10:39:19.820582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.613 [2024-12-09 10:39:19.820594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.613 [2024-12-09 10:39:19.833013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.613 [2024-12-09 10:39:19.833399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.613 [2024-12-09 10:39:19.833443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.613 [2024-12-09 10:39:19.833458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.613 [2024-12-09 10:39:19.833715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.613 [2024-12-09 10:39:19.833916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.613 [2024-12-09 10:39:19.833936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.613 [2024-12-09 10:39:19.833956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.613 [2024-12-09 10:39:19.833969] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.613 [2024-12-09 10:39:19.846328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.613 [2024-12-09 10:39:19.846691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.613 [2024-12-09 10:39:19.846719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.613 [2024-12-09 10:39:19.846735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.613 [2024-12-09 10:39:19.846980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.613 [2024-12-09 10:39:19.847209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.614 [2024-12-09 10:39:19.847230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.614 [2024-12-09 10:39:19.847243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.614 [2024-12-09 10:39:19.847255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.614 [2024-12-09 10:39:19.859697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:47.614 [2024-12-09 10:39:19.860144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.614 [2024-12-09 10:39:19.860173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:47.614 [2024-12-09 10:39:19.860190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:47.614 [2024-12-09 10:39:19.860435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:47.614 [2024-12-09 10:39:19.860637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:47.614 [2024-12-09 10:39:19.860656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:47.614 [2024-12-09 10:39:19.860668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:47.614 [2024-12-09 10:39:19.860679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:47.614 [2024-12-09 10:39:19.873079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.873526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.873555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.873571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.873816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.874033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.874053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.874065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.874077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.886405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.886858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.886887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.886903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.887160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.887390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.887410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.887424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.887436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.899714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.900093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.900121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.900137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.900382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.900619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.900639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.900651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.900662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.913075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.913511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.913539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.913555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.913786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.914003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.914022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.914035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.914046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.926341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.926661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.926689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.926710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.926936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.927179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.927200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.927212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.927224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.939697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.940047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.940075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.940091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.940319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.940577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.940596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.940608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.940619] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.953034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.953415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.953444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.953460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.953706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.614 [2024-12-09 10:39:19.953907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.614 [2024-12-09 10:39:19.953926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.614 [2024-12-09 10:39:19.953938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.614 [2024-12-09 10:39:19.953949] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.614 [2024-12-09 10:39:19.966472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.614 [2024-12-09 10:39:19.966788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.614 [2024-12-09 10:39:19.966814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.614 [2024-12-09 10:39:19.966829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.614 [2024-12-09 10:39:19.967033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:19.967285] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:19.967306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:19.967319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:19.967331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.615 [2024-12-09 10:39:19.979763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.615 [2024-12-09 10:39:19.980144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-12-09 10:39:19.980173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.615 [2024-12-09 10:39:19.980189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.615 [2024-12-09 10:39:19.980435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:19.980653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:19.980672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:19.980684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:19.980696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.615 [2024-12-09 10:39:19.993033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.615 [2024-12-09 10:39:19.993488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-12-09 10:39:19.993517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.615 [2024-12-09 10:39:19.993533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.615 [2024-12-09 10:39:19.993765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:19.993982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:19.994001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:19.994013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:19.994025] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.615 [2024-12-09 10:39:20.006684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.615 [2024-12-09 10:39:20.007034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-12-09 10:39:20.007064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.615 [2024-12-09 10:39:20.007081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.615 [2024-12-09 10:39:20.007308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:20.007557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:20.007577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:20.007596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:20.007609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.615 [2024-12-09 10:39:20.020323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.615 [2024-12-09 10:39:20.020776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-12-09 10:39:20.020808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.615 [2024-12-09 10:39:20.020825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.615 [2024-12-09 10:39:20.021059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:20.021317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:20.021339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:20.021353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:20.021365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.615 [2024-12-09 10:39:20.033846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.615 [2024-12-09 10:39:20.034271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-12-09 10:39:20.034301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.615 [2024-12-09 10:39:20.034318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.615 [2024-12-09 10:39:20.034551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:20.034815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:20.034837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:20.034851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:20.034864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.615 [2024-12-09 10:39:20.047370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.615 [2024-12-09 10:39:20.047778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-12-09 10:39:20.047806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.615 [2024-12-09 10:39:20.047822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.615 [2024-12-09 10:39:20.048071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.615 [2024-12-09 10:39:20.048383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.615 [2024-12-09 10:39:20.048416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.615 [2024-12-09 10:39:20.048465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.615 [2024-12-09 10:39:20.048486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.874 [2024-12-09 10:39:20.061093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.874 [2024-12-09 10:39:20.061501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.874 [2024-12-09 10:39:20.061533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.874 [2024-12-09 10:39:20.061550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.874 [2024-12-09 10:39:20.061781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.874 [2024-12-09 10:39:20.062000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.874 [2024-12-09 10:39:20.062019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.874 [2024-12-09 10:39:20.062031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.874 [2024-12-09 10:39:20.062043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.874 [2024-12-09 10:39:20.075053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.874 [2024-12-09 10:39:20.075556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.874 [2024-12-09 10:39:20.075594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.874 [2024-12-09 10:39:20.075614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.874 [2024-12-09 10:39:20.075833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.874 [2024-12-09 10:39:20.076069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.874 [2024-12-09 10:39:20.076090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.874 [2024-12-09 10:39:20.076103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.874 [2024-12-09 10:39:20.076116] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.088556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.088999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.089029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.089046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.089275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.089524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.089544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.089557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.089569] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.101997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.102379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.102408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.102430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.102677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.102880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.102899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.102912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.102924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.115451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.115851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.115894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.115911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.116168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.116392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.116413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.116427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.116440] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.128917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.129298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.129327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.129343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.129590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.129791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.129810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.129823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.129834] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.142470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.142828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.142856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.142871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.143096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.143356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.143378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.143391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.143403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.155847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.156222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.156251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.156268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.156500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.156725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.156759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.156771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.156783] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.169248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.169656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.169685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.169702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.169936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.170180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.170201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.170229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.170242] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.182687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.183127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.183163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.183180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.183425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.183655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.183675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.183693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.183706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.196028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.875 [2024-12-09 10:39:20.196453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.875 [2024-12-09 10:39:20.196483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.875 [2024-12-09 10:39:20.196498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.875 [2024-12-09 10:39:20.196733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.875 [2024-12-09 10:39:20.196954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.875 [2024-12-09 10:39:20.196973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.875 [2024-12-09 10:39:20.196985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.875 [2024-12-09 10:39:20.196997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.875 [2024-12-09 10:39:20.209521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.209885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.209914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.209930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.210171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.210401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.210422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.210449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.210461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.223036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.223397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.223427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.223459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.223704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.223949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.223968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.223980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.223992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.236589] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.236973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.237001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.237017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.237260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.237504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.237523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.237535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.237546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.250104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.250644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.250686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.250702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.250948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.251201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.251223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.251236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.251248] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.263787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.264250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.264280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.264296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.264542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.264761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.264796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.264809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.264821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.277445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.277785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.277813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.277834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.278067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.278326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.278349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.278363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.278376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.291236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.291650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.291679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.291695] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.291926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.292174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.292196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.292211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.292224] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:47.876 [2024-12-09 10:39:20.304835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:47.876 [2024-12-09 10:39:20.305230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.876 [2024-12-09 10:39:20.305259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:47.876 [2024-12-09 10:39:20.305275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:47.876 [2024-12-09 10:39:20.305507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:47.876 [2024-12-09 10:39:20.305723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:47.876 [2024-12-09 10:39:20.305742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:47.876 [2024-12-09 10:39:20.305754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:47.876 [2024-12-09 10:39:20.305766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.135 [2024-12-09 10:39:20.318577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.135 [2024-12-09 10:39:20.318978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.319013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.319032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.319262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.319504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.319525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.319543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.319564] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.332076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.332447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.332477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.332494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.332725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.332944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.332963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.332975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.332987] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.345560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.345912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.345940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.345955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.346195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.346418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.346439] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.346453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.346481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.359046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.359483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.359516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.359549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.359782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.360020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.360039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.360057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.360069] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.372705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.373103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.373132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.373158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.373377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.373614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.373634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.373646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.373657] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.386283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.386770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.386799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.386814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.387061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.387333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.387356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.387370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.387382] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.399761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.400210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.400240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.400256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.400488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.400726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.400745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.400758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.400769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.413446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.413825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.413853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.413869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.414102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.414340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.414363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.414376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.414387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.427043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.427422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.427452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.427468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.427701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.427930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.427950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.136 [2024-12-09 10:39:20.427963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.136 [2024-12-09 10:39:20.427975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.136 [2024-12-09 10:39:20.440635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.136 [2024-12-09 10:39:20.440982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.136 [2024-12-09 10:39:20.441011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.136 [2024-12-09 10:39:20.441027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.136 [2024-12-09 10:39:20.441271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.136 [2024-12-09 10:39:20.441500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.136 [2024-12-09 10:39:20.441520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.137 [2024-12-09 10:39:20.441532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.137 [2024-12-09 10:39:20.441543] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.137 [2024-12-09 10:39:20.454189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.137 [2024-12-09 10:39:20.454525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.137 [2024-12-09 10:39:20.454553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.137 [2024-12-09 10:39:20.454573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.137 [2024-12-09 10:39:20.454798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.137 [2024-12-09 10:39:20.455050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.137 [2024-12-09 10:39:20.455071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.137 [2024-12-09 10:39:20.455084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.137 [2024-12-09 10:39:20.455096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.137 [2024-12-09 10:39:20.467741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.137 [2024-12-09 10:39:20.468095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.137 [2024-12-09 10:39:20.468123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.137 [2024-12-09 10:39:20.468149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.137 [2024-12-09 10:39:20.468384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.137 [2024-12-09 10:39:20.468622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.137 [2024-12-09 10:39:20.468642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.137 [2024-12-09 10:39:20.468654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.137 [2024-12-09 10:39:20.468665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.137 [2024-12-09 10:39:20.481279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.137 [2024-12-09 10:39:20.481698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.137 [2024-12-09 10:39:20.481740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.137 [2024-12-09 10:39:20.481755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.137 [2024-12-09 10:39:20.482027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.137 [2024-12-09 10:39:20.482297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.137 [2024-12-09 10:39:20.482320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.137 [2024-12-09 10:39:20.482334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.137 [2024-12-09 10:39:20.482347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.137 [2024-12-09 10:39:20.494895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.137 [2024-12-09 10:39:20.495259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.137 [2024-12-09 10:39:20.495288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.137 [2024-12-09 10:39:20.495304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.137 [2024-12-09 10:39:20.495521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.137 [2024-12-09 10:39:20.495760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.137 [2024-12-09 10:39:20.495787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.137 [2024-12-09 10:39:20.495800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.137 [2024-12-09 10:39:20.495812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.137 [2024-12-09 10:39:20.508468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.137 [2024-12-09 10:39:20.508866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.137 [2024-12-09 10:39:20.508907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.137 [2024-12-09 10:39:20.508923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.137 [2024-12-09 10:39:20.509170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.137 [2024-12-09 10:39:20.509394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.137 [2024-12-09 10:39:20.509415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.137 [2024-12-09 10:39:20.509428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.137 [2024-12-09 10:39:20.509441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.137 [2024-12-09 10:39:20.522013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.137 [2024-12-09 10:39:20.522436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.137 [2024-12-09 10:39:20.522527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.137 [2024-12-09 10:39:20.522544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.137 [2024-12-09 10:39:20.522786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.137 [2024-12-09 10:39:20.523002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.137 [2024-12-09 10:39:20.523020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.137 [2024-12-09 10:39:20.523032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.137 [2024-12-09 10:39:20.523043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.137 [2024-12-09 10:39:20.535544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.137 [2024-12-09 10:39:20.535988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.137 [2024-12-09 10:39:20.536017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.137 [2024-12-09 10:39:20.536048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.137 [2024-12-09 10:39:20.536323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.137 [2024-12-09 10:39:20.536561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.137 [2024-12-09 10:39:20.536580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.137 [2024-12-09 10:39:20.536592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.137 [2024-12-09 10:39:20.536608] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.137 [2024-12-09 10:39:20.549067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.137 [2024-12-09 10:39:20.549445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.137 [2024-12-09 10:39:20.549474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.137 [2024-12-09 10:39:20.549490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.137 [2024-12-09 10:39:20.549721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.137 [2024-12-09 10:39:20.549957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.137 [2024-12-09 10:39:20.549978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.137 [2024-12-09 10:39:20.549993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.137 [2024-12-09 10:39:20.550005] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.137 [2024-12-09 10:39:20.562700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.137 [2024-12-09 10:39:20.563136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.137 [2024-12-09 10:39:20.563171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.137 [2024-12-09 10:39:20.563203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.137 [2024-12-09 10:39:20.563434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.137 [2024-12-09 10:39:20.563648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.137 [2024-12-09 10:39:20.563667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.137 [2024-12-09 10:39:20.563678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.137 [2024-12-09 10:39:20.563692] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.396 [2024-12-09 10:39:20.576702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.396 [2024-12-09 10:39:20.577186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.396 [2024-12-09 10:39:20.577218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.396 [2024-12-09 10:39:20.577235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.396 [2024-12-09 10:39:20.577468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.396 [2024-12-09 10:39:20.577683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.396 [2024-12-09 10:39:20.577703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.396 [2024-12-09 10:39:20.577716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.396 [2024-12-09 10:39:20.577727] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.396 [2024-12-09 10:39:20.590110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.396 [2024-12-09 10:39:20.590590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.396 [2024-12-09 10:39:20.590645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.396 [2024-12-09 10:39:20.590662] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.396 [2024-12-09 10:39:20.590927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.396 [2024-12-09 10:39:20.591124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.591169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.591184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.591196] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.603607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.603996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.604041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.604057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.604313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.604570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.604590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.604603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.604615] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.617029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.617547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.617601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.617616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.617873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.618101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.618135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.618160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.618173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.630537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.630963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.630991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.631007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.631283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.631507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.631526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.631538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.631549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.644041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.644449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.644493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.644509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.644746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.644989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.645008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.645021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.645033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.657691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.657994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.658034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.658049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.658294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.658545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.658564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.658576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.658587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 5572.25 IOPS, 21.77 MiB/s [2024-12-09T09:39:20.838Z]
00:28:48.397 [2024-12-09 10:39:20.671272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.671669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.671712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.671728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.671954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.672193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.672218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.672232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.672243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.684749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.685130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.685182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.397 [2024-12-09 10:39:20.685198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.397 [2024-12-09 10:39:20.685415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.397 [2024-12-09 10:39:20.685667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.397 [2024-12-09 10:39:20.685686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.397 [2024-12-09 10:39:20.685697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.397 [2024-12-09 10:39:20.685708] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.397 [2024-12-09 10:39:20.698208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.397 [2024-12-09 10:39:20.698628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.397 [2024-12-09 10:39:20.698671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.698686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.698954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.699179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.699200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.699212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.699223] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.711736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.712104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.712157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.712177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.712409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.712623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.712641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.712654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.712671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.725025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.725442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.725484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.725500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.725751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.725947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.725966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.725978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.725990] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.738362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.738748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.738791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.738807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.739076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.739310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.739332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.739345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.739357] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.751545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.751910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.751953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.751969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.752253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.752455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.752474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.752486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.752498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.764682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.765118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.765172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.765192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.765451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.765664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.765683] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.765696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.765707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.777924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.778297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.778326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.778342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.778599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.778796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.778814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.778826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.398 [2024-12-09 10:39:20.778838] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.398 [2024-12-09 10:39:20.791233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.398 [2024-12-09 10:39:20.791562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.398 [2024-12-09 10:39:20.791604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.398 [2024-12-09 10:39:20.791619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.398 [2024-12-09 10:39:20.791836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.398 [2024-12-09 10:39:20.792049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.398 [2024-12-09 10:39:20.792068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.398 [2024-12-09 10:39:20.792080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.399 [2024-12-09 10:39:20.792092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.399 [2024-12-09 10:39:20.805069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.399 [2024-12-09 10:39:20.805475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.399 [2024-12-09 10:39:20.805531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.399 [2024-12-09 10:39:20.805548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.399 [2024-12-09 10:39:20.805811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.399 [2024-12-09 10:39:20.806035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.399 [2024-12-09 10:39:20.806055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.399 [2024-12-09 10:39:20.806068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.399 [2024-12-09 10:39:20.806080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.399 [2024-12-09 10:39:20.818360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.399 [2024-12-09 10:39:20.818806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.399 [2024-12-09 10:39:20.818860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.399 [2024-12-09 10:39:20.818876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.399 [2024-12-09 10:39:20.819131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.399 [2024-12-09 10:39:20.819382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.399 [2024-12-09 10:39:20.819402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.399 [2024-12-09 10:39:20.819415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.399 [2024-12-09 10:39:20.819442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.399 [2024-12-09 10:39:20.831641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.399 [2024-12-09 10:39:20.832010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.399 [2024-12-09 10:39:20.832052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.399 [2024-12-09 10:39:20.832067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.399 [2024-12-09 10:39:20.832348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.399 [2024-12-09 10:39:20.832578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.399 [2024-12-09 10:39:20.832599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.399 [2024-12-09 10:39:20.832612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.399 [2024-12-09 10:39:20.832623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.658 [2024-12-09 10:39:20.845214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.658 [2024-12-09 10:39:20.845633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.658 [2024-12-09 10:39:20.845662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.658 [2024-12-09 10:39:20.845677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.658 [2024-12-09 10:39:20.845910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.658 [2024-12-09 10:39:20.846136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.658 [2024-12-09 10:39:20.846172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.658 [2024-12-09 10:39:20.846189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.658 [2024-12-09 10:39:20.846202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.658 [2024-12-09 10:39:20.858320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.658 [2024-12-09 10:39:20.858691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.658 [2024-12-09 10:39:20.858735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.658 [2024-12-09 10:39:20.858751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.658 [2024-12-09 10:39:20.859020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.658 [2024-12-09 10:39:20.859245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.658 [2024-12-09 10:39:20.859267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.658 [2024-12-09 10:39:20.859280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.658 [2024-12-09 10:39:20.859292] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.658 [2024-12-09 10:39:20.871537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.658 [2024-12-09 10:39:20.871907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.658 [2024-12-09 10:39:20.871935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.658 [2024-12-09 10:39:20.871950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.658 [2024-12-09 10:39:20.872199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.658 [2024-12-09 10:39:20.872402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.658 [2024-12-09 10:39:20.872421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.658 [2024-12-09 10:39:20.872433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.658 [2024-12-09 10:39:20.872444] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.658 [2024-12-09 10:39:20.884639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.658 [2024-12-09 10:39:20.884970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.658 [2024-12-09 10:39:20.884997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.658 [2024-12-09 10:39:20.885012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.658 [2024-12-09 10:39:20.885260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.658 [2024-12-09 10:39:20.885493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.658 [2024-12-09 10:39:20.885512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.658 [2024-12-09 10:39:20.885524] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.658 [2024-12-09 10:39:20.885540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.658 [2024-12-09 10:39:20.897830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.658 [2024-12-09 10:39:20.898221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.658 [2024-12-09 10:39:20.898264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.658 [2024-12-09 10:39:20.898280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.658 [2024-12-09 10:39:20.898505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.658 [2024-12-09 10:39:20.898717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.658 [2024-12-09 10:39:20.898736] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.658 [2024-12-09 10:39:20.898748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.658 [2024-12-09 10:39:20.898759] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.658 [2024-12-09 10:39:20.910974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.658 [2024-12-09 10:39:20.911364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.658 [2024-12-09 10:39:20.911406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.658 [2024-12-09 10:39:20.911422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.658 [2024-12-09 10:39:20.911647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.658 [2024-12-09 10:39:20.911861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.658 [2024-12-09 10:39:20.911880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.658 [2024-12-09 10:39:20.911891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.658 [2024-12-09 10:39:20.911903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.658 [2024-12-09 10:39:20.924024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.658 [2024-12-09 10:39:20.924474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.658 [2024-12-09 10:39:20.924516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.658 [2024-12-09 10:39:20.924533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.658 [2024-12-09 10:39:20.924778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.658 [2024-12-09 10:39:20.924974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.658 [2024-12-09 10:39:20.924992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.658 [2024-12-09 10:39:20.925004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.658 [2024-12-09 10:39:20.925015] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.658 [2024-12-09 10:39:20.937248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:20.937612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:20.937645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:20.937661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:20.937898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:20.938109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:20.938127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:20.938148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:20.938177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:20.950350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:20.950717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:20.950760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:20.950776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:20.951047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:20.951289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:20.951311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:20.951324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:20.951335] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:20.963521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:20.963903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:20.963945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:20.963960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:20.964197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:20.964416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:20.964435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:20.964448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:20.964474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:20.976611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:20.976991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:20.977033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:20.977049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:20.977311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:20.977537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:20.977555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:20.977567] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:20.977578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:20.989726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:20.990070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:20.990151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:20.990169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:20.990420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:20.990617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:20.990635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:20.990647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:20.990658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:21.003006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:21.003438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:21.003481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:21.003497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:21.003745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:21.003942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:21.003960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:21.003972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:21.003983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:21.016036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:21.016391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:21.016419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:21.016434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:21.016659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:21.016872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:21.016896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:21.016908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:21.016919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:21.029310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:21.029701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:21.029745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:21.029761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:21.030032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:21.030275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:21.030296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:21.030310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:21.030322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:21.042385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:21.042800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.659 [2024-12-09 10:39:21.042842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.659 [2024-12-09 10:39:21.042857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.659 [2024-12-09 10:39:21.043095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.659 [2024-12-09 10:39:21.043346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.659 [2024-12-09 10:39:21.043367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.659 [2024-12-09 10:39:21.043381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.659 [2024-12-09 10:39:21.043392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.659 [2024-12-09 10:39:21.055463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.659 [2024-12-09 10:39:21.055914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.660 [2024-12-09 10:39:21.055966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.660 [2024-12-09 10:39:21.055982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.660 [2024-12-09 10:39:21.056272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.660 [2024-12-09 10:39:21.056523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.660 [2024-12-09 10:39:21.056543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.660 [2024-12-09 10:39:21.056555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.660 [2024-12-09 10:39:21.056567] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.660 [2024-12-09 10:39:21.068871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.660 [2024-12-09 10:39:21.069233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.660 [2024-12-09 10:39:21.069264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.660 [2024-12-09 10:39:21.069283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.660 [2024-12-09 10:39:21.069526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.660 [2024-12-09 10:39:21.069739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.660 [2024-12-09 10:39:21.069758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.660 [2024-12-09 10:39:21.069770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.660 [2024-12-09 10:39:21.069781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.660 [2024-12-09 10:39:21.082172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.660 [2024-12-09 10:39:21.082565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.660 [2024-12-09 10:39:21.082592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.660 [2024-12-09 10:39:21.082607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.660 [2024-12-09 10:39:21.082832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.660 [2024-12-09 10:39:21.083044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.660 [2024-12-09 10:39:21.083062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.660 [2024-12-09 10:39:21.083075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.660 [2024-12-09 10:39:21.083085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.660 [2024-12-09 10:39:21.095645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.660 [2024-12-09 10:39:21.096045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.660 [2024-12-09 10:39:21.096091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.660 [2024-12-09 10:39:21.096118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.660 [2024-12-09 10:39:21.096462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.660 [2024-12-09 10:39:21.096703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.660 [2024-12-09 10:39:21.096724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.660 [2024-12-09 10:39:21.096738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.660 [2024-12-09 10:39:21.096751] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.919 [2024-12-09 10:39:21.108850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.919 [2024-12-09 10:39:21.109183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.919 [2024-12-09 10:39:21.109218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.919 [2024-12-09 10:39:21.109235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.919 [2024-12-09 10:39:21.109460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.919 [2024-12-09 10:39:21.109672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.919 [2024-12-09 10:39:21.109691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.919 [2024-12-09 10:39:21.109703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.919 [2024-12-09 10:39:21.109714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.919 [2024-12-09 10:39:21.122070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:48.919 [2024-12-09 10:39:21.122594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:48.919 [2024-12-09 10:39:21.122623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:48.919 [2024-12-09 10:39:21.122639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:48.919 [2024-12-09 10:39:21.122875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:48.919 [2024-12-09 10:39:21.123088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:48.919 [2024-12-09 10:39:21.123107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:48.919 [2024-12-09 10:39:21.123119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:48.919 [2024-12-09 10:39:21.123157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:48.919 [2024-12-09 10:39:21.135162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.919 [2024-12-09 10:39:21.135588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.919 [2024-12-09 10:39:21.135616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.919 [2024-12-09 10:39:21.135632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.919 [2024-12-09 10:39:21.135874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.919 [2024-12-09 10:39:21.136071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.919 [2024-12-09 10:39:21.136090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.919 [2024-12-09 10:39:21.136102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.919 [2024-12-09 10:39:21.136113] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.919 [2024-12-09 10:39:21.148347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.919 [2024-12-09 10:39:21.148777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.919 [2024-12-09 10:39:21.148805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.919 [2024-12-09 10:39:21.148820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.919 [2024-12-09 10:39:21.149066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.919 [2024-12-09 10:39:21.149312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.919 [2024-12-09 10:39:21.149333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.919 [2024-12-09 10:39:21.149347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.919 [2024-12-09 10:39:21.149358] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.919 [2024-12-09 10:39:21.161400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.919 [2024-12-09 10:39:21.161738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.919 [2024-12-09 10:39:21.161766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.919 [2024-12-09 10:39:21.161782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.919 [2024-12-09 10:39:21.162008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.919 [2024-12-09 10:39:21.162252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.919 [2024-12-09 10:39:21.162273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.919 [2024-12-09 10:39:21.162286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.919 [2024-12-09 10:39:21.162298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.919 [2024-12-09 10:39:21.174445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.919 [2024-12-09 10:39:21.174808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.919 [2024-12-09 10:39:21.174836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.919 [2024-12-09 10:39:21.174851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.919 [2024-12-09 10:39:21.175088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.919 [2024-12-09 10:39:21.175332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.919 [2024-12-09 10:39:21.175353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.919 [2024-12-09 10:39:21.175366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.919 [2024-12-09 10:39:21.175377] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.919 [2024-12-09 10:39:21.187554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.919 [2024-12-09 10:39:21.187985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.188026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.188043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.188286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.188539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.188558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.188575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.188587] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.200768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.201202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.201231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.201247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.201490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.201686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.201705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.201717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.201728] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.213854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.214281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.214309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.214324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.214563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.214775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.214793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.214805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.214816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.226873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.227239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.227267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.227283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.227522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.227735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.227753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.227765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.227776] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.240094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.240478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.240522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.240537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.240807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.241004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.241022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.241034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.241045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.253159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.253656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.253699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.253715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.253952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.254191] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.254211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.254223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.254235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.266201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.266594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.266622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.266638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.266861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.267074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.267092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.267104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.267115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.279381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.279844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.279902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.279918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.280188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.280397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.280417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.280430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.280442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.292415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.292781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.292823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.292837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.293089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.293334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.293356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.293369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.293381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.305528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.305888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.920 [2024-12-09 10:39:21.305917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.920 [2024-12-09 10:39:21.305932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.920 [2024-12-09 10:39:21.306176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.920 [2024-12-09 10:39:21.306414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.920 [2024-12-09 10:39:21.306449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.920 [2024-12-09 10:39:21.306462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.920 [2024-12-09 10:39:21.306475] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.920 [2024-12-09 10:39:21.318722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.920 [2024-12-09 10:39:21.319059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.921 [2024-12-09 10:39:21.319087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.921 [2024-12-09 10:39:21.319103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.921 [2024-12-09 10:39:21.319345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.921 [2024-12-09 10:39:21.319594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.921 [2024-12-09 10:39:21.319614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.921 [2024-12-09 10:39:21.319626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.921 [2024-12-09 10:39:21.319636] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.921 [2024-12-09 10:39:21.331792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.921 [2024-12-09 10:39:21.332218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.921 [2024-12-09 10:39:21.332261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.921 [2024-12-09 10:39:21.332277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.921 [2024-12-09 10:39:21.332519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.921 [2024-12-09 10:39:21.332730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.921 [2024-12-09 10:39:21.332748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.921 [2024-12-09 10:39:21.332760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.921 [2024-12-09 10:39:21.332771] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.921 [2024-12-09 10:39:21.344970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.921 [2024-12-09 10:39:21.345344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.921 [2024-12-09 10:39:21.345387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.921 [2024-12-09 10:39:21.345403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.921 [2024-12-09 10:39:21.345660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:48.921 [2024-12-09 10:39:21.345855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:48.921 [2024-12-09 10:39:21.345874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:48.921 [2024-12-09 10:39:21.345885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:48.921 [2024-12-09 10:39:21.345897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:48.921 [2024-12-09 10:39:21.358665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:48.921 [2024-12-09 10:39:21.359010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:48.921 [2024-12-09 10:39:21.359057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:48.921 [2024-12-09 10:39:21.359084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:48.921 [2024-12-09 10:39:21.359385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.180 [2024-12-09 10:39:21.359641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.180 [2024-12-09 10:39:21.359662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.180 [2024-12-09 10:39:21.359680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.180 [2024-12-09 10:39:21.359693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.180 [2024-12-09 10:39:21.371878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.180 [2024-12-09 10:39:21.372248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.180 [2024-12-09 10:39:21.372278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.180 [2024-12-09 10:39:21.372294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.180 [2024-12-09 10:39:21.372525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.180 [2024-12-09 10:39:21.372722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.180 [2024-12-09 10:39:21.372741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.180 [2024-12-09 10:39:21.372753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.180 [2024-12-09 10:39:21.372764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.180 [2024-12-09 10:39:21.384947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.180 [2024-12-09 10:39:21.385325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.180 [2024-12-09 10:39:21.385368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.180 [2024-12-09 10:39:21.385384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.180 [2024-12-09 10:39:21.385653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.180 [2024-12-09 10:39:21.385849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.180 [2024-12-09 10:39:21.385868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.180 [2024-12-09 10:39:21.385880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.180 [2024-12-09 10:39:21.385891] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.180 [2024-12-09 10:39:21.398127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.180 [2024-12-09 10:39:21.398511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.180 [2024-12-09 10:39:21.398538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.180 [2024-12-09 10:39:21.398554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.180 [2024-12-09 10:39:21.398778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.398989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.399008] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.399020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.399031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.411319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.411745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.411788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.411805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.412047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.412292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.412313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.412326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.412339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.424550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.424933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.424974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.424990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.425245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.425475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.425509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.425522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.425533] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.437724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.438135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.438195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.438211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.438459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.438655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.438673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.438685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.438696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.450873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.451217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.451246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.451267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.451494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.451706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.451725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.451737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.451748] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.464075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.464474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.464516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.464531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.464779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.464975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.464993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.465005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.465016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.477160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.477525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.477552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.477567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.477785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.477995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.478013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.478025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.478036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.490288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.181 [2024-12-09 10:39:21.490654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.181 [2024-12-09 10:39:21.490697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.181 [2024-12-09 10:39:21.490713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.181 [2024-12-09 10:39:21.490967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.181 [2024-12-09 10:39:21.491209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.181 [2024-12-09 10:39:21.491230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.181 [2024-12-09 10:39:21.491242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.181 [2024-12-09 10:39:21.491253] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.181 [2024-12-09 10:39:21.503409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.181 [2024-12-09 10:39:21.503744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.181 [2024-12-09 10:39:21.503772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.181 [2024-12-09 10:39:21.503787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.181 [2024-12-09 10:39:21.504014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.181 [2024-12-09 10:39:21.504256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.181 [2024-12-09 10:39:21.504277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.181 [2024-12-09 10:39:21.504289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.181 [2024-12-09 10:39:21.504301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.181 [2024-12-09 10:39:21.516828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.181 [2024-12-09 10:39:21.517220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.181 [2024-12-09 10:39:21.517250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.181 [2024-12-09 10:39:21.517266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.181 [2024-12-09 10:39:21.517501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.181 [2024-12-09 10:39:21.517737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.181 [2024-12-09 10:39:21.517756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.181 [2024-12-09 10:39:21.517769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.181 [2024-12-09 10:39:21.517781] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.181 [2024-12-09 10:39:21.530289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.181 [2024-12-09 10:39:21.530807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.181 [2024-12-09 10:39:21.530861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.181 [2024-12-09 10:39:21.530876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.181 [2024-12-09 10:39:21.531150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.531388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.531408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.531428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.531441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.182 [2024-12-09 10:39:21.543724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.182 [2024-12-09 10:39:21.544135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.182 [2024-12-09 10:39:21.544171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.182 [2024-12-09 10:39:21.544187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.182 [2024-12-09 10:39:21.544419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.544631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.544649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.544661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.544672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.182 [2024-12-09 10:39:21.557086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.182 [2024-12-09 10:39:21.557496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.182 [2024-12-09 10:39:21.557525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.182 [2024-12-09 10:39:21.557542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.182 [2024-12-09 10:39:21.557773] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.558038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.558059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.558072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.558085] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.182 [2024-12-09 10:39:21.570408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.182 [2024-12-09 10:39:21.570731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.182 [2024-12-09 10:39:21.570758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.182 [2024-12-09 10:39:21.570773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.182 [2024-12-09 10:39:21.570991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.571251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.571273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.571286] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.571298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.182 [2024-12-09 10:39:21.583632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.182 [2024-12-09 10:39:21.584044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.182 [2024-12-09 10:39:21.584095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.182 [2024-12-09 10:39:21.584110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.182 [2024-12-09 10:39:21.584376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.584591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.584610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.584622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.584633] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.182 [2024-12-09 10:39:21.596813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.182 [2024-12-09 10:39:21.597261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.182 [2024-12-09 10:39:21.597290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.182 [2024-12-09 10:39:21.597306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.182 [2024-12-09 10:39:21.597563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.597774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.597793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.597805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.597816] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.182 [2024-12-09 10:39:21.609940] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.182 [2024-12-09 10:39:21.610331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.182 [2024-12-09 10:39:21.610373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.182 [2024-12-09 10:39:21.610389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.182 [2024-12-09 10:39:21.610617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.182 [2024-12-09 10:39:21.610828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.182 [2024-12-09 10:39:21.610847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.182 [2024-12-09 10:39:21.610859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.182 [2024-12-09 10:39:21.610869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.623420] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.623849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.623906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.623928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.624201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.624404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.624424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.624436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.624447] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.636593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.636938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.637010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.637027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.637300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.637538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.637557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.637569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.637580] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.649852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.650248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.650292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.650309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.650539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.650751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.650770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.650782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.650793] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.663066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.663504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.663548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.663565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.663808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.664009] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.664027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.664039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.664050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 4457.80 IOPS, 17.41 MiB/s [2024-12-09T09:39:21.882Z] [2024-12-09 10:39:21.676238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.676607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.676650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.676666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.676936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.677159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.677193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.677207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.677219] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.689432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.689811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.689852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.689868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.690092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.690338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.690359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.690372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.690383] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.702639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.702974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.703003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.703018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.703278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.703530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.703549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.703566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.703578] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.715838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.716245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.716274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.716292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.716517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.716729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.716747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.441 [2024-12-09 10:39:21.716759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.441 [2024-12-09 10:39:21.716770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.441 [2024-12-09 10:39:21.729205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.441 [2024-12-09 10:39:21.729690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.441 [2024-12-09 10:39:21.729732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.441 [2024-12-09 10:39:21.729749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.441 [2024-12-09 10:39:21.730003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.441 [2024-12-09 10:39:21.730240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.441 [2024-12-09 10:39:21.730260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.730272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.730284] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.742498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.742910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.742963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.742979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.743240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.743456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.743477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.743490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.743516] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.755693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.756053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.756080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.756094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.756354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.756568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.756588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.756600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.756610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.769009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.769534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.769589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.769604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.769851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.770047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.770066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.770078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.770089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.782280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.782624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.782651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.782666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.782888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.783100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.783118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.783130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.783166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.795964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.796351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.796379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.796403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.796637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.796866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.796886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.796899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.796910] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.809631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.809983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.810026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.810042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.810269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.810508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.810544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.810557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.810568] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.823330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.823744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.823773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.823789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.824006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.824238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.824260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.824274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.824287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.836886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.837205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.837234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.837250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.837481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.837698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.837716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.837729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.837740] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.850248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.850738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.850780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.850797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.851051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.851311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.442 [2024-12-09 10:39:21.851333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.442 [2024-12-09 10:39:21.851347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.442 [2024-12-09 10:39:21.851360] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.442 [2024-12-09 10:39:21.863552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.442 [2024-12-09 10:39:21.863931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.442 [2024-12-09 10:39:21.863973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.442 [2024-12-09 10:39:21.863989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.442 [2024-12-09 10:39:21.864258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.442 [2024-12-09 10:39:21.864467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.443 [2024-12-09 10:39:21.864500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.443 [2024-12-09 10:39:21.864513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.443 [2024-12-09 10:39:21.864524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.443 [2024-12-09 10:39:21.876765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.443 [2024-12-09 10:39:21.877137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.443 [2024-12-09 10:39:21.877185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.443 [2024-12-09 10:39:21.877201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.443 [2024-12-09 10:39:21.877468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.443 [2024-12-09 10:39:21.877728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.443 [2024-12-09 10:39:21.877773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.443 [2024-12-09 10:39:21.877799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.443 [2024-12-09 10:39:21.877814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.889990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.890413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.890458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.890474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.890745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.890941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.890960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.890972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.890984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.903271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.903660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.903687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.903703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.903941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.904182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.904218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.904231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.904244] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.916440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.916838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.916867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.916882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.917109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.917361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.917382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.917395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.917407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.929694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.930084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.930112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.930152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.930403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.930617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.930636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.930648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.930659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.942913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.943331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.943359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.943375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.943648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.943844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.943863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.943875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.943886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.956177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.956555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.956598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.956614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.956883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.957080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.957098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.957110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.957121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.969431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.702 [2024-12-09 10:39:21.969872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.702 [2024-12-09 10:39:21.969899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.702 [2024-12-09 10:39:21.969919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.702 [2024-12-09 10:39:21.970166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.702 [2024-12-09 10:39:21.970413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.702 [2024-12-09 10:39:21.970434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.702 [2024-12-09 10:39:21.970447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.702 [2024-12-09 10:39:21.970459] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.702 [2024-12-09 10:39:21.982696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:21.983000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:21.983046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:21.983067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:21.983344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:21.983558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:21.983577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:21.983589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:21.983600] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:21.995973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:21.996396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:21.996446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:21.996462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:21.996705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:21.996920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:21.996938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:21.996950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:21.996961] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.009300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.009638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.009665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.009681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.009906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.010117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.010149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.010179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.010191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.022497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.022894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.022921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.022937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.023172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.023381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.023401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.023414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.023425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.035518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.035878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.035905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.035921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.036169] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.036391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.036411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.036424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.036451] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.048784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.049242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.049271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.049287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.049523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.049735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.049754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.049766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.049782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.062097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.062496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.062530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.062546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.062779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.063008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.063029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.063058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.063071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.075466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.075909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.075952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.075968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.076218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.076428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.076462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.076475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.076486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.088634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.089129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.089187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.089203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.089446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.089642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.089660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.089672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.089683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.703 [2024-12-09 10:39:22.101692] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:49.703 [2024-12-09 10:39:22.102058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:49.703 [2024-12-09 10:39:22.102085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:49.703 [2024-12-09 10:39:22.102100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:49.703 [2024-12-09 10:39:22.102355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:49.703 [2024-12-09 10:39:22.102588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:49.703 [2024-12-09 10:39:22.102607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:49.703 [2024-12-09 10:39:22.102619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:49.703 [2024-12-09 10:39:22.102630] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:49.704 [2024-12-09 10:39:22.114865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.704 [2024-12-09 10:39:22.115236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.704 [2024-12-09 10:39:22.115265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.704 [2024-12-09 10:39:22.115282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.704 [2024-12-09 10:39:22.115526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.704 [2024-12-09 10:39:22.115737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.704 [2024-12-09 10:39:22.115756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.704 [2024-12-09 10:39:22.115767] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.704 [2024-12-09 10:39:22.115778] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.704 [2024-12-09 10:39:22.128041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.704 [2024-12-09 10:39:22.128472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.704 [2024-12-09 10:39:22.128515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.704 [2024-12-09 10:39:22.128531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:49.704 [2024-12-09 10:39:22.128775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:49.704 [2024-12-09 10:39:22.128971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:49.704 [2024-12-09 10:39:22.128989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:49.704 [2024-12-09 10:39:22.129001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:49.704 [2024-12-09 10:39:22.129012] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:49.704 [2024-12-09 10:39:22.141642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:49.704 [2024-12-09 10:39:22.142145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:49.704 [2024-12-09 10:39:22.142194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:49.704 [2024-12-09 10:39:22.142212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.142477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.142708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.142729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.142757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.142770] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.154838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.155213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.155259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.155275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.155546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.155743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.155762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.155774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.155786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.168022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.168421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.168449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.168465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.168683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.168894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.168913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.168925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.168937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.181221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.181717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.181760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.181776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.182030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.182269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.182294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.182307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.182319] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.194426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.194796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.194839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.194855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.195125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.195350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.195370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.195383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.195394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.207703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.208067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.208111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.208127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.208408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.208622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.208641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.208653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.208664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.220799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.221166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.221208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.221224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.221479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.221689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.221708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.221720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.221736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.012 [2024-12-09 10:39:22.233994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.012 [2024-12-09 10:39:22.234428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.012 [2024-12-09 10:39:22.234471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.012 [2024-12-09 10:39:22.234487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.012 [2024-12-09 10:39:22.234729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.012 [2024-12-09 10:39:22.234941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.012 [2024-12-09 10:39:22.234960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.012 [2024-12-09 10:39:22.234971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.012 [2024-12-09 10:39:22.234983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.247110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.247543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.247585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.247601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.247844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.248055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.248074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.248086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.248097] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2653103 Killed "${NVMF_APP[@]}" "$@"
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:50.013 [2024-12-09 10:39:22.260728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.261145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2654058
00:28:50.013 [2024-12-09 10:39:22.261174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.261191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2654058
00:28:50.013 [2024-12-09 10:39:22.261408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2654058 ']'
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:50.013 [2024-12-09 10:39:22.261649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.261670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.261682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.261693] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:50.013 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:50.013 [2024-12-09 10:39:22.274222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.274628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.274672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.274688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.274926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.275154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.275177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.275190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.275203] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.287694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.288107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.288158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.288176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.288422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.288641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.288660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.288673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.288684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.301084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.301483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.301513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.301529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.301774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.301992] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.302012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.302025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.302036] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.308790] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization...
00:28:50.013 [2024-12-09 10:39:22.308861] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:50.013 [2024-12-09 10:39:22.314571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.314931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.314959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.314975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.315202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.315425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.315460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.315473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.315486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.328201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.328685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.328726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.328742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.328966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.329212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.329233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.329246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.329259] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.341621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.341973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.013 [2024-12-09 10:39:22.342001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.013 [2024-12-09 10:39:22.342032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.013 [2024-12-09 10:39:22.342288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.013 [2024-12-09 10:39:22.342516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.013 [2024-12-09 10:39:22.342536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.013 [2024-12-09 10:39:22.342549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.013 [2024-12-09 10:39:22.342560] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.013 [2024-12-09 10:39:22.355119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.013 [2024-12-09 10:39:22.355499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.014 [2024-12-09 10:39:22.355527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.014 [2024-12-09 10:39:22.355543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.014 [2024-12-09 10:39:22.355774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.014 [2024-12-09 10:39:22.355991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.014 [2024-12-09 10:39:22.356010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.014 [2024-12-09 10:39:22.356022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.014 [2024-12-09 10:39:22.356033] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.014 [2024-12-09 10:39:22.368526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.014 [2024-12-09 10:39:22.368900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.014 [2024-12-09 10:39:22.368944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.014 [2024-12-09 10:39:22.368960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.014 [2024-12-09 10:39:22.369229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.014 [2024-12-09 10:39:22.369438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.014 [2024-12-09 10:39:22.369472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.014 [2024-12-09 10:39:22.369485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.014 [2024-12-09 10:39:22.369497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.014 [2024-12-09 10:39:22.381791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.014 [2024-12-09 10:39:22.382260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.014 [2024-12-09 10:39:22.382290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.014 [2024-12-09 10:39:22.382311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.014 [2024-12-09 10:39:22.382545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.014 [2024-12-09 10:39:22.382761] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.014 [2024-12-09 10:39:22.382780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.014 [2024-12-09 10:39:22.382792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.014 [2024-12-09 10:39:22.382804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.014 [2024-12-09 10:39:22.384617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:28:50.014 [2024-12-09 10:39:22.395068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:28:50.014 [2024-12-09 10:39:22.395633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:50.014 [2024-12-09 10:39:22.395672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420
00:28:50.014 [2024-12-09 10:39:22.395691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set
00:28:50.014 [2024-12-09 10:39:22.395931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor
00:28:50.014 [2024-12-09 10:39:22.396190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:28:50.014 [2024-12-09 10:39:22.396214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:28:50.014 [2024-12-09 10:39:22.396230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:28:50.014 [2024-12-09 10:39:22.396246] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:28:50.014 [2024-12-09 10:39:22.408457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.014 [2024-12-09 10:39:22.408886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.014 [2024-12-09 10:39:22.408933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.014 [2024-12-09 10:39:22.408950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.014 [2024-12-09 10:39:22.409248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.014 [2024-12-09 10:39:22.409473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.014 [2024-12-09 10:39:22.409492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.014 [2024-12-09 10:39:22.409506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.014 [2024-12-09 10:39:22.409519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.014 [2024-12-09 10:39:22.421792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.014 [2024-12-09 10:39:22.422262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.014 [2024-12-09 10:39:22.422291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.014 [2024-12-09 10:39:22.422307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.014 [2024-12-09 10:39:22.422552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.014 [2024-12-09 10:39:22.422769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.014 [2024-12-09 10:39:22.422789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.014 [2024-12-09 10:39:22.422802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.014 [2024-12-09 10:39:22.422814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.014 [2024-12-09 10:39:22.435148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.014 [2024-12-09 10:39:22.435533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.014 [2024-12-09 10:39:22.435577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.014 [2024-12-09 10:39:22.435594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.014 [2024-12-09 10:39:22.435840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.014 [2024-12-09 10:39:22.436057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.014 [2024-12-09 10:39:22.436077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.014 [2024-12-09 10:39:22.436089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.014 [2024-12-09 10:39:22.436100] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:50.014 [2024-12-09 10:39:22.441925] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.014 [2024-12-09 10:39:22.441969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.014 [2024-12-09 10:39:22.441982] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.014 [2024-12-09 10:39:22.441993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:50.014 [2024-12-09 10:39:22.442002] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:50.014 [2024-12-09 10:39:22.443368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.014 [2024-12-09 10:39:22.443429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.014 [2024-12-09 10:39:22.443432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.014 [2024-12-09 10:39:22.448957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.014 [2024-12-09 10:39:22.449409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.014 [2024-12-09 10:39:22.449445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.014 [2024-12-09 10:39:22.449464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.014 [2024-12-09 10:39:22.449744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.014 [2024-12-09 10:39:22.449976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.014 [2024-12-09 10:39:22.449999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.014 [2024-12-09 10:39:22.450015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.014 [2024-12-09 10:39:22.450031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.273 [2024-12-09 10:39:22.462704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.273 [2024-12-09 10:39:22.463191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.273 [2024-12-09 10:39:22.463232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.273 [2024-12-09 10:39:22.463252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.463493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.463712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.463733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.463749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.463764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.476351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.476893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.476935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.476954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.477191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.477418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.477454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.477471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.477486] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.490020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.490551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.490593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.490613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.490855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.491074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.491095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.491110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.491151] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.503642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.504108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.504151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.504182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.504407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.504651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.504672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.504687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.504701] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.517224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.517803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.517848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.517867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.518109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.518338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.518360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.518376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.518392] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.530855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.531313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.531345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.531363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.531597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.531815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.531836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.531851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.531864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.544435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.544809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.544838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.544854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.545071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.545343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.545365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.545380] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.545393] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.558069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.558407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.558436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.558453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.558671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.558894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.558915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.558929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.558942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.274 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:28:50.274 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:50.274 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:50.274 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.274 [2024-12-09 10:39:22.571817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.572193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.572223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.572239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.572456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.572679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.572701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.274 [2024-12-09 10:39:22.572716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.274 [2024-12-09 10:39:22.572729] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.274 [2024-12-09 10:39:22.585539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.274 [2024-12-09 10:39:22.585892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.274 [2024-12-09 10:39:22.585921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.274 [2024-12-09 10:39:22.585937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.274 [2024-12-09 10:39:22.586170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.274 [2024-12-09 10:39:22.586393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.274 [2024-12-09 10:39:22.586415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.275 [2024-12-09 10:39:22.586429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.275 [2024-12-09 10:39:22.586457] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.275 [2024-12-09 10:39:22.593975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.275 [2024-12-09 10:39:22.599111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.275 [2024-12-09 10:39:22.599447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-12-09 10:39:22.599475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.275 [2024-12-09 10:39:22.599491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.275 [2024-12-09 10:39:22.599708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.275 [2024-12-09 10:39:22.599929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.275 [2024-12-09 10:39:22.599951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.275 [2024-12-09 10:39:22.599964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:28:50.275 [2024-12-09 10:39:22.599977] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.275 [2024-12-09 10:39:22.612837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.275 [2024-12-09 10:39:22.613275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-12-09 10:39:22.613307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.275 [2024-12-09 10:39:22.613324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.275 [2024-12-09 10:39:22.613559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.275 [2024-12-09 10:39:22.613768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.275 [2024-12-09 10:39:22.613788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.275 [2024-12-09 10:39:22.613802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.275 [2024-12-09 10:39:22.613824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.275 [2024-12-09 10:39:22.626419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.275 [2024-12-09 10:39:22.626784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-12-09 10:39:22.626812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.275 [2024-12-09 10:39:22.626829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.275 [2024-12-09 10:39:22.627061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.275 [2024-12-09 10:39:22.627321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.275 [2024-12-09 10:39:22.627343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.275 [2024-12-09 10:39:22.627357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.275 [2024-12-09 10:39:22.627370] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.275 [2024-12-09 10:39:22.639970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.275 [2024-12-09 10:39:22.640510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-12-09 10:39:22.640551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.275 [2024-12-09 10:39:22.640571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.275 [2024-12-09 10:39:22.640812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.275 [2024-12-09 10:39:22.641032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.275 [2024-12-09 10:39:22.641052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:28:50.275 [2024-12-09 10:39:22.641067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.275 [2024-12-09 10:39:22.641083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:28:50.275 Malloc0 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.275 [2024-12-09 10:39:22.653787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.275 [2024-12-09 10:39:22.654162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:50.275 [2024-12-09 10:39:22.654192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb5e660 with addr=10.0.0.2, port=4420 00:28:50.275 [2024-12-09 10:39:22.654209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb5e660 is same with the state(6) to be set 00:28:50.275 [2024-12-09 10:39:22.654426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb5e660 (9): Bad file descriptor 00:28:50.275 [2024-12-09 10:39:22.654666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:28:50.275 [2024-12-09 10:39:22.654687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
2] controller reinitialization failed 00:28:50.275 [2024-12-09 10:39:22.654700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:28:50.275 [2024-12-09 10:39:22.654712] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:50.275 [2024-12-09 10:39:22.662975] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.275 10:39:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2653391 00:28:50.275 [2024-12-09 10:39:22.667526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:28:50.275 3714.83 IOPS, 14.51 MiB/s [2024-12-09T09:39:22.716Z] [2024-12-09 10:39:22.693875] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:28:52.581 4307.43 IOPS, 16.83 MiB/s [2024-12-09T09:39:25.956Z] 4794.50 IOPS, 18.73 MiB/s [2024-12-09T09:39:26.891Z] 5183.33 IOPS, 20.25 MiB/s [2024-12-09T09:39:27.824Z] 5514.00 IOPS, 21.54 MiB/s [2024-12-09T09:39:28.759Z] 5749.36 IOPS, 22.46 MiB/s [2024-12-09T09:39:29.694Z] 5932.75 IOPS, 23.17 MiB/s [2024-12-09T09:39:31.068Z] 6098.54 IOPS, 23.82 MiB/s [2024-12-09T09:39:32.002Z] 6237.71 IOPS, 24.37 MiB/s 00:28:59.561 Latency(us) 00:28:59.561 [2024-12-09T09:39:32.002Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.561 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.561 Verification LBA range: start 0x0 length 0x4000 00:28:59.561 Nvme1n1 : 15.01 6369.33 24.88 9751.36 0.00 7916.50 2597.17 20486.07 00:28:59.561 [2024-12-09T09:39:32.002Z] =================================================================================================================== 00:28:59.561 [2024-12-09T09:39:32.002Z] Total : 6369.33 24.88 9751.36 0.00 7916.50 2597.17 20486.07 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:59.561 10:39:31 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:59.561 10:39:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:59.561 rmmod nvme_tcp 00:28:59.561 rmmod nvme_fabrics 00:28:59.561 rmmod nvme_keyring 00:28:59.875 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:59.875 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:59.875 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:59.875 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2654058 ']' 00:28:59.875 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2654058 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2654058 ']' 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2654058 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2654058 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2654058' 00:28:59.876 killing process with pid 2654058 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 2654058 00:28:59.876 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2654058 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.134 10:39:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:02.037 00:29:02.037 real 0m22.660s 00:29:02.037 user 0m59.363s 00:29:02.037 sys 0m4.884s 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:02.037 ************************************ 00:29:02.037 END TEST nvmf_bdevperf 00:29:02.037 ************************************ 00:29:02.037 10:39:34 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:02.037 ************************************ 00:29:02.037 START TEST nvmf_target_disconnect 00:29:02.037 ************************************ 00:29:02.037 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:02.295 * Looking for test storage... 00:29:02.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.295 10:39:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:02.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.295 --rc genhtml_branch_coverage=1 00:29:02.295 --rc genhtml_function_coverage=1 00:29:02.295 --rc genhtml_legend=1 00:29:02.295 --rc geninfo_all_blocks=1 00:29:02.295 --rc geninfo_unexecuted_blocks=1 
00:29:02.295 00:29:02.295 ' 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:02.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.295 --rc genhtml_branch_coverage=1 00:29:02.295 --rc genhtml_function_coverage=1 00:29:02.295 --rc genhtml_legend=1 00:29:02.295 --rc geninfo_all_blocks=1 00:29:02.295 --rc geninfo_unexecuted_blocks=1 00:29:02.295 00:29:02.295 ' 00:29:02.295 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:02.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.295 --rc genhtml_branch_coverage=1 00:29:02.295 --rc genhtml_function_coverage=1 00:29:02.296 --rc genhtml_legend=1 00:29:02.296 --rc geninfo_all_blocks=1 00:29:02.296 --rc geninfo_unexecuted_blocks=1 00:29:02.296 00:29:02.296 ' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:02.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.296 --rc genhtml_branch_coverage=1 00:29:02.296 --rc genhtml_function_coverage=1 00:29:02.296 --rc genhtml_legend=1 00:29:02.296 --rc geninfo_all_blocks=1 00:29:02.296 --rc geninfo_unexecuted_blocks=1 00:29:02.296 00:29:02.296 ' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.296 10:39:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:02.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:02.296 10:39:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:04.827 
10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:04.827 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:04.827 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:04.827 Found net devices under 0000:09:00.0: cvl_0_0 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.827 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:04.827 Found net devices under 0000:09:00.1: cvl_0_1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.828 10:39:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:04.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:29:04.828 00:29:04.828 --- 10.0.0.2 ping statistics --- 00:29:04.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.828 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:04.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:29:04.828 00:29:04.828 --- 10.0.0.1 ping statistics --- 00:29:04.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.828 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:04.828 10:39:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.828 ************************************ 00:29:04.828 START TEST nvmf_target_disconnect_tc1 00:29:04.828 ************************************ 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:29:04.828 10:39:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:04.828 [2024-12-09 10:39:37.083420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.828 [2024-12-09 10:39:37.083493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x958f40 with 
addr=10.0.0.2, port=4420 00:29:04.828 [2024-12-09 10:39:37.083525] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:04.828 [2024-12-09 10:39:37.083553] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:04.828 [2024-12-09 10:39:37.083567] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:04.828 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:04.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:04.828 Initializing NVMe Controllers 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:29:04.828 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.829 00:29:04.829 real 0m0.129s 00:29:04.829 user 0m0.075s 00:29:04.829 sys 0m0.053s 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:04.829 ************************************ 00:29:04.829 END TEST nvmf_target_disconnect_tc1 00:29:04.829 ************************************ 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:04.829 10:39:37 
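The `errno = 111` (ECONNREFUSED) above is the expected outcome of tc1: the reconnect example deliberately probes 10.0.0.2:4420 before any target is listening, and the `NOT` wrapper asserts that the probe fails cleanly. The same condition can be reproduced with bash's `/dev/tcp` pseudo-device (127.0.0.1:4999 is a hypothetical address assumed to have no listener):

```shell
# Connecting to a port with no listener fails with ECONNREFUSED,
# just as the reconnect example's connect() does in tc1.
if bash -c 'exec 3<>/dev/tcp/127.0.0.1/4999' 2>/dev/null; then
  result=connected
else
  result=refused   # no listener on the port, so connect() is refused
fi
echo "$result"
```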
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:04.829 ************************************ 00:29:04.829 START TEST nvmf_target_disconnect_tc2 00:29:04.829 ************************************ 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2657217 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2657217 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2657217 ']' 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.829 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:04.829 [2024-12-09 10:39:37.234793] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:29:04.829 [2024-12-09 10:39:37.234884] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:05.143 [2024-12-09 10:39:37.308674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:05.143 [2024-12-09 10:39:37.367400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.143 [2024-12-09 10:39:37.367454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.143 [2024-12-09 10:39:37.367477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:05.143 [2024-12-09 10:39:37.367487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:05.143 [2024-12-09 10:39:37.367497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:05.143 [2024-12-09 10:39:37.369035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:05.143 [2024-12-09 10:39:37.369097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:05.143 [2024-12-09 10:39:37.369164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:05.143 [2024-12-09 10:39:37.369169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.143 Malloc0 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.143 10:39:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.143 [2024-12-09 10:39:37.546864] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:05.143 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.428 10:39:37 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.428 [2024-12-09 10:39:37.575137] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2657247 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:05.428 10:39:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:07.342 10:39:39 
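The tc2 bring-up traced above is the usual SPDK RPC sequence, driven through `rpc_cmd` against the target running inside the namespace. Spelled out against `scripts/rpc.py` it would look roughly like this (the `rpc` path and the already-running `nvmf_tgt` instance are assumptions; this is a configuration sketch, not a runnable test):

```shell
# Sketch of the tc2 target configuration, assuming nvmf_tgt was already
# started inside cvl_0_0_ns_spdk by nvmfappstart (as in the log above).
rpc=./scripts/rpc.py   # hypothetical path to the SPDK RPC client

$rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB malloc bdev, 512 B blocks
$rpc nvmf_create_transport -t tcp -o        # TCP transport, as in the log
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```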
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2657217 00:29:07.342 10:39:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 
Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 [2024-12-09 10:39:39.604459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O 
failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Write completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.342 Read completed with error (sct=0, sc=8) 00:29:07.342 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 
00:29:07.343 [2024-12-09 10:39:39.604751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 
starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 [2024-12-09 10:39:39.605064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 
00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Write completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 Read completed with error (sct=0, sc=8) 00:29:07.343 starting I/O failed 00:29:07.343 [2024-12-09 10:39:39.605356] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:07.343 [2024-12-09 10:39:39.605589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.343 [2024-12-09 10:39:39.605632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.343 qpair failed and we were unable to recover it. 00:29:07.343 [2024-12-09 10:39:39.605735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.343 [2024-12-09 10:39:39.605763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.343 qpair failed and we were unable to recover it. 00:29:07.343 [2024-12-09 10:39:39.605874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.343 [2024-12-09 10:39:39.605912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.343 qpair failed and we were unable to recover it. 00:29:07.343 [2024-12-09 10:39:39.606034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.343 [2024-12-09 10:39:39.606060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.343 qpair failed and we were unable to recover it. 00:29:07.343 [2024-12-09 10:39:39.606184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.343 [2024-12-09 10:39:39.606212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.343 qpair failed and we were unable to recover it. 
00:29:07.343 [2024-12-09 10:39:39.606303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.343 [2024-12-09 10:39:39.606335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.343 qpair failed and we were unable to recover it.
[The same message pair (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 10:39:39.606424 through 10:39:39.622519 for tqpair values 0x7f5294000b90, 0x7f5298000b90, 0x7f52a0000b90, and 0x1f1efa0; identical repeats omitted.]
00:29:07.345 [2024-12-09 10:39:39.622635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.622662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.622773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.622800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.622920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.622961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.623084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.623216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 
00:29:07.345 [2024-12-09 10:39:39.623341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.623457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.623597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.623716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.623863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 
00:29:07.345 [2024-12-09 10:39:39.623972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.623997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.624148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.624177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.624288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.624314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.624398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.624425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.624565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.624591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 
00:29:07.345 [2024-12-09 10:39:39.624681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.624707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.624843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.624869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.624975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.625154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.625282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 
00:29:07.345 [2024-12-09 10:39:39.625431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.625570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.625743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.625908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.625957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.626094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.626135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 
00:29:07.345 [2024-12-09 10:39:39.626248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.626279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.626393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.345 [2024-12-09 10:39:39.626421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.345 qpair failed and we were unable to recover it. 00:29:07.345 [2024-12-09 10:39:39.626543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.626570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.626702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.626750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.626840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.626868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.626974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.627105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.627279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.627407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.627546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.627661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.627783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.627895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.627924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.628012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.628123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.628248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.628363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.628513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.628637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.628781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.628894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.628922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.629079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.629213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.629378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.629496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.629637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.629762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.629933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.629961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.630083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.630213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.630357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.630520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.630635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.630745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.630887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.630920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.631037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.631065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.631211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.631240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.631380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.631407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.631496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.631525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.631712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.631777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.631865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.631893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.632047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.632088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.632244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.632272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.632416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.632444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.632557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.632584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.632669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.632697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.632853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.632886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.633042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.633210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.633375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.633487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.633655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.633760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.633880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.633907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.634045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.634097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.634202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.634229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.634339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.634365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.634578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.634611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.634723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.634771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.634910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.634936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 
00:29:07.346 [2024-12-09 10:39:39.635047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.635074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.346 [2024-12-09 10:39:39.635162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.346 [2024-12-09 10:39:39.635194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.346 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.635338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.635368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.635463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.635490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.635599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.635627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.635802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.635855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.635975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.636121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.636242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.636357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.636499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.636659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.636826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.636950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.636991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.637089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.637234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.637346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.637517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.637633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.637800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.637941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.637969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.638051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.638191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.638312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.638424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.638559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.638727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.638867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.638894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.639017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.639164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.639354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.639468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.639576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.639741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.639880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.639908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.640002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.640148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.640286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.640395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.640511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.640620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.640735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.640865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.640904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.641048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.641215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.641352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.641489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.641662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.641823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.641945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.641985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.642132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.642166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.642261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.642290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.642434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.642462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.642601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.642628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.642749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.642777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.642901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.642929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.643058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.643088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.643213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.643241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.643326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.643353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.643489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.643553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 
00:29:07.347 [2024-12-09 10:39:39.643754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.643810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.643981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.644035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.644153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.347 [2024-12-09 10:39:39.644181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.347 qpair failed and we were unable to recover it. 00:29:07.347 [2024-12-09 10:39:39.644320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.644347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.644490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.644517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.644655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.644681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.644766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.644792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.644931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.644957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.645075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.645108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.645250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.645291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.645439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.645469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.645583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.645610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.645752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.645816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.645927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.645955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.646044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.646194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.646358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.646497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.646615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.646782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.646934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.646973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.647103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.647235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.647349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.647516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.647654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.647804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.647923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.647950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.648104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.648149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.648277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.648305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.648424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.648451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.648539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.648565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.648651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.648678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.648882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.648933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.649041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.649180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.649341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.649483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.649651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.649764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.649931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.649957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.650054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.650082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.650226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.650266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.650363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.650392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.650506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.650534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.650705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.650757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.650842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.650870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.651012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.651039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.651161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.651193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.651313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.651343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.651565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.651625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.651795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.651821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.651961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.651992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.652083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.652109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.652234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.652261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 
00:29:07.348 [2024-12-09 10:39:39.652387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.652427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.652584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.652614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.652697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.652725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.652808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.348 [2024-12-09 10:39:39.652833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.348 qpair failed and we were unable to recover it. 00:29:07.348 [2024-12-09 10:39:39.652973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.653119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.653246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.653390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.653509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.653621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.653788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.653961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.653990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.654132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.654167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.654252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.654278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.654420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.654448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.654564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.654591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.654767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.654796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.654910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.654937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.655058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.655209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.655315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.655474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.655661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.655813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.655960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.655987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.656081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.656193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.656305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.656472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.656583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.656697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.656837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.656953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.656981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.657093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.657120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.657249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.657276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.657416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.657443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.657549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.657576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.657720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.657747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.657877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.657916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.658008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.658159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.658299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.658408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.658546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.658649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.658770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.658912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.658938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.659372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.659925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.659953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.660069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.660222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.660345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.660459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.660603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 
00:29:07.349 [2024-12-09 10:39:39.660718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.660882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.660922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.661040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.661069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.349 [2024-12-09 10:39:39.661158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.349 [2024-12-09 10:39:39.661184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.349 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.661296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.661323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 
00:29:07.350 [2024-12-09 10:39:39.661416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.661445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.661564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.661591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.661691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.661719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.661877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.661906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.662003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.662043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 
00:29:07.350 [2024-12-09 10:39:39.662167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.662195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.662337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.662364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.662478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.662505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.662615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.662642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 00:29:07.350 [2024-12-09 10:39:39.662733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.350 [2024-12-09 10:39:39.662761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.350 qpair failed and we were unable to recover it. 
00:29:07.350 [2024-12-09 10:39:39.662848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.662875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.662992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.663941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.663970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.664951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.664978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.665940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.665969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.666923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.666950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.667099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.667270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.667414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.667565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.667742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.667881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.667991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.668896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.668981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.669010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.669104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.669157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.669283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.669312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.669450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.669477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.669583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.669609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.350 qpair failed and we were unable to recover it.
00:29:07.350 [2024-12-09 10:39:39.669749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.350 [2024-12-09 10:39:39.669776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.669923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.669950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.670898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.670924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.671075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.671102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.671204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.671231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.671324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.671351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.671488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.671515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.671630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.671657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.671830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.671899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.672938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.672979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.673130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.673171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.673288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.673316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.673395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.673422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.673508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.673536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.673756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.673810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.673890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.673916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.674856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.674897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.675076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.675188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.675349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.675516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.675689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.675832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.675976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.676003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.676119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.676152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.676277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.676305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.676430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.351 [2024-12-09 10:39:39.676460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.351 qpair failed and we were unable to recover it.
00:29:07.351 [2024-12-09 10:39:39.676563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.676611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.676700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.676727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.676836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.676862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.676977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.677120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 
00:29:07.351 [2024-12-09 10:39:39.677246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.677364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.677499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.677615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 00:29:07.351 [2024-12-09 10:39:39.677799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.351 [2024-12-09 10:39:39.677839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.351 qpair failed and we were unable to recover it. 
00:29:07.351 [2024-12-09 10:39:39.677965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.677993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.678106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.678229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.678372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.678486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.678624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.678734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.678853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.678893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.679042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.679177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.679288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.679399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.679505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.679637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.679772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.679923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.679953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.680077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.680202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.680321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.680459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.680604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.680769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.680929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.680958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.681101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.681129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.681259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.681287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.681367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.681393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.681519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.681566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.681703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.681750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.681866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.681893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.682010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.682125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.682279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.682421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.682561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.682722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.682862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.682889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.683509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.683935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.683963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.684074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.684102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.684253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.684281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.684399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.684425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.684565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.684598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.684750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.684799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.684909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.684936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.685039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.685078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.685202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.685230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.685349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.685378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.685501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.685528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.685668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.685714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.685889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.685935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.686041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.686078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.686188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.686215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.686323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.686350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 00:29:07.352 [2024-12-09 10:39:39.686496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.686523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.352 qpair failed and we were unable to recover it. 
00:29:07.352 [2024-12-09 10:39:39.686635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.352 [2024-12-09 10:39:39.686662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.686744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.686769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.686878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.686905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.687042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.687165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 
00:29:07.353 [2024-12-09 10:39:39.687298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.687437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.687576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.687746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.687878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.687918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 
00:29:07.353 [2024-12-09 10:39:39.688072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.688233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.688340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.688490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.688630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 
00:29:07.353 [2024-12-09 10:39:39.688772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.688893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.688924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.689037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.689066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.689181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.689208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.689320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.689346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 
00:29:07.353 [2024-12-09 10:39:39.689483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.689510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.689617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.689672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.689842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.689890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.690000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.690027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 00:29:07.353 [2024-12-09 10:39:39.690148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.353 [2024-12-09 10:39:39.690175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.353 qpair failed and we were unable to recover it. 
00:29:07.353 [2024-12-09 10:39:39.690286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.690313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.690407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.690434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.690521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.690547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.690630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.690658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.690768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.690794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.690882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.690909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.691016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.691056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.691202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.691231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.691356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.691396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.691522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.691550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.691708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.691736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.691859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.691891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.692921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.692949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.693898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.693980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.694908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.694948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.695095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.695124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.695223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.353 [2024-12-09 10:39:39.695251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.353 qpair failed and we were unable to recover it.
00:29:07.353 [2024-12-09 10:39:39.695336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.695362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.695507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.695534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.695646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.695674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.695814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.695841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.695982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.696124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.696258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.696375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.696538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.696765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.696910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.696937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.697108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.697269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.697420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.697573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.697719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.697888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.697972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.698960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.698986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.699881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.699910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.700966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.700993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.701117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.701172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.701327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.701357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.701506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.701534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.701615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.701643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.701769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.701826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.701968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.701995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.702970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.702996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.703074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.703099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.703220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.354 [2024-12-09 10:39:39.703250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.354 qpair failed and we were unable to recover it.
00:29:07.354 [2024-12-09 10:39:39.703396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.355 [2024-12-09 10:39:39.703423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.355 qpair failed and we were unable to recover it.
00:29:07.355 [2024-12-09 10:39:39.703536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.355 [2024-12-09 10:39:39.703562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.355 qpair failed and we were unable to recover it.
00:29:07.355 [2024-12-09 10:39:39.703674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.355 [2024-12-09 10:39:39.703701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.355 qpair failed and we were unable to recover it.
00:29:07.355 [2024-12-09 10:39:39.703785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.355 [2024-12-09 10:39:39.703812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.355 qpair failed and we were unable to recover it.
00:29:07.355 [2024-12-09 10:39:39.703958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.703984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.704123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.704157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.704281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.704309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.704445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.704506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.704617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.704644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.704722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.704748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.704840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.704867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.704976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.705097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.705248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.705382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.705544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.705707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.705877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.705903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.706001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.706113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.706231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.706363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.706486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.706627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.706771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.706915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.706941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.707042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.707225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.707381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.707503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.707644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.707782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.707896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.707924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.708037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.708063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.708153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.708180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.708292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.708319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.708437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.708463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.708676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.708734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.708875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.708902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.708993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.709161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.709307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.709448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.709587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.709772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.709916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.709945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.710469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.710894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.710996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.711111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.711302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.711442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.711607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.711788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 
00:29:07.355 [2024-12-09 10:39:39.711898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.711924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.712031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.712057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.355 qpair failed and we were unable to recover it. 00:29:07.355 [2024-12-09 10:39:39.712208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.355 [2024-12-09 10:39:39.712235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.712349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.712376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.712490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.712516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.712612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.712641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.712752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.712780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.712889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.712922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.713038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.713172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.713338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.713488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.713627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.713755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.713879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.713908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.714050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.714077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.714194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.714223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.714361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.714389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.714532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.714559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.714698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.714726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.714840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.714866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.714989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.715134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.715278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.715419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.715562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.715684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.715823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.715929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.715954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.716097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.716260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.716417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.716532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.716676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.716817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.716954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.716980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.717563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.717851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.717993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.718159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.718266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.718373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.718488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.718606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.718748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.718900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.718941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.719057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.719087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.719222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.719262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.719388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.719416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.719586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.719634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.719716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.719742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.719849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.719877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.719988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.720132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.720290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 
00:29:07.356 [2024-12-09 10:39:39.720450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.720605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.720749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.356 qpair failed and we were unable to recover it. 00:29:07.356 [2024-12-09 10:39:39.720893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.356 [2024-12-09 10:39:39.720921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.721012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.721153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.721266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.721375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.721512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.721627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.721762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.721920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.721960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.722081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.722209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.722371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.722515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.722653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.722788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.722950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.722980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.723123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.723245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.723354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.723458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.723600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.723720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.723839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.723864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.724505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.724905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.724933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.725059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.725224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.725364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.725474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.725614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.725749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.725858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.725887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.725982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.726125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.726286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.726399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.726567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.726707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.726887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.726932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.727012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.727187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.727358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.727493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.727601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.727712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.727842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.727874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.357 [2024-12-09 10:39:39.728026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.728066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.728163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.728192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.728309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.728336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.728462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.728495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 00:29:07.357 [2024-12-09 10:39:39.728677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.357 [2024-12-09 10:39:39.728728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.357 qpair failed and we were unable to recover it. 
00:29:07.358 [2024-12-09 10:39:39.728842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.358 [2024-12-09 10:39:39.728869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.358 qpair failed and we were unable to recover it. 00:29:07.358 [2024-12-09 10:39:39.729004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.358 [2024-12-09 10:39:39.729032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.358 qpair failed and we were unable to recover it. 00:29:07.358 [2024-12-09 10:39:39.729154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.358 [2024-12-09 10:39:39.729185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.358 qpair failed and we were unable to recover it. 00:29:07.358 [2024-12-09 10:39:39.729309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.358 [2024-12-09 10:39:39.729349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.358 qpair failed and we were unable to recover it. 00:29:07.358 [2024-12-09 10:39:39.729465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.358 [2024-12-09 10:39:39.729493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.358 qpair failed and we were unable to recover it. 
00:29:07.358 [2024-12-09 10:39:39.729634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.358 [2024-12-09 10:39:39.729661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.358 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error / qpair failed and we were unable to recover it) repeats continuously from 10:39:39.729 through 10:39:39.746 for tqpair values 0x7f5294000b90, 0x7f5298000b90, 0x7f52a0000b90, and 0x1f1efa0, all targeting addr=10.0.0.2, port=4420 ...]
00:29:07.359 [2024-12-09 10:39:39.746158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.746185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.746323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.746350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.746462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.746489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.746625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.746651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.746791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.746817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.746906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.746932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.747015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.747122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.747264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.747416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.747562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.747703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.747872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.747899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.748014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.748150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.748330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.748495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.748638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.748782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.748893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.748919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.749004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.749169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.749322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.749465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.749601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.749782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.749922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.749951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.750097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.750125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.750252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.750281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.750391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.750419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.750560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.750587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.750679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.750707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.750824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.750864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.751013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.751164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.751302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.751458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.751630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.751808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.751918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.751943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.752091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.752215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.752351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.752493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.752614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.752807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.752941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.752970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.753065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.753093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.753194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.753224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.753319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.753347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.753486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.753534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.753673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.753720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.753860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.753887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.753978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.754006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.754096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.754123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 
00:29:07.360 [2024-12-09 10:39:39.754212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.754238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.360 [2024-12-09 10:39:39.754352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.360 [2024-12-09 10:39:39.754377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.360 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.754486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.754513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.754620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.754646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.754756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.754784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 
00:29:07.361 [2024-12-09 10:39:39.754894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.754921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.755013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.755164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.755331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.755449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 
00:29:07.361 [2024-12-09 10:39:39.755585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.755729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.755872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.755902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.756040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.756209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 
00:29:07.361 [2024-12-09 10:39:39.756350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.756492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.756629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.756780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.756929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.756958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 
00:29:07.361 [2024-12-09 10:39:39.757085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.757131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.757245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.757274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.757358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.757386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.757462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.757488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 00:29:07.361 [2024-12-09 10:39:39.757566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.361 [2024-12-09 10:39:39.757591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.361 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.773206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.773246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.773372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.773400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.773511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.773538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.773673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.773700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.773813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.773839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.773957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.773985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.774086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.774227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.774363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.774510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.774629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.774739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.774878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.774907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.775022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.775159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.775271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.775377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.775497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.775610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.775720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.775850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.775881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.776588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.776968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.776996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.777124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.777175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.777263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.777292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.777377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.777405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.777525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.777552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.777699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.777746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.777855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.777882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.778003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.778121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.778248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.778417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.778577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.778745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.778900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.778926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.779062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.779177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.779316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.779456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.779623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.779735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.779875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.779900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.780001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.780042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.637 [2024-12-09 10:39:39.780198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.780228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.780356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.780396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.780491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.780518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.780606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.780631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 00:29:07.637 [2024-12-09 10:39:39.780711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.637 [2024-12-09 10:39:39.780737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.637 qpair failed and we were unable to recover it. 
00:29:07.638 [2024-12-09 10:39:39.780846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.780872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.780962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.781161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.781282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.781387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 
00:29:07.638 [2024-12-09 10:39:39.781549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.781673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.781865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.781913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.781998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.782175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 
00:29:07.638 [2024-12-09 10:39:39.782296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.782466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.782583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.782697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.782836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.782863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 
00:29:07.638 [2024-12-09 10:39:39.782985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.783025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.783177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.783207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.783293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.783320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.783495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.783551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 00:29:07.638 [2024-12-09 10:39:39.783691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.638 [2024-12-09 10:39:39.783747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.638 qpair failed and we were unable to recover it. 
00:29:07.638 [2024-12-09 10:39:39.783929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.783989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.784095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.784122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.784243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.784270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.784386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.784413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.784490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.784515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.784737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.784793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.784887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.784915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.785874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.785986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.786932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.786958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.787883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.787996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.788884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.788912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.638 qpair failed and we were unable to recover it.
00:29:07.638 [2024-12-09 10:39:39.789002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.638 [2024-12-09 10:39:39.789028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.789149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.789179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.789268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.789297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.789412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.789439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.789560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.789588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.789678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.789706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.789870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.789909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.790959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.790992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.791972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.791998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.792856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.792885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.793881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.793908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.794971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.794998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.795084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.795111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.795225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.795251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.795389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.795417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.795532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.795559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.795668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.795694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.795826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.795871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.796000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.796040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.796191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.796221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.796312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.639 [2024-12-09 10:39:39.796340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.639 qpair failed and we were unable to recover it.
00:29:07.639 [2024-12-09 10:39:39.796475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.796520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.796646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.796686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.796821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.796849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.796964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.796991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.797914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.797941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.798884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.798913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.799886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.640 [2024-12-09 10:39:39.799915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.640 qpair failed and we were unable to recover it.
00:29:07.640 [2024-12-09 10:39:39.800068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.800095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.800225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.800253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.800341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.800368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.800523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.800564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.800680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.800708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.800886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.800943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.801060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.801209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.801353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.801472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.801640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.801781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.801947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.801974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.802110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.802157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.802291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.802331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.802480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.802509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.802593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.802625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.802713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.802739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.802828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.802858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.802974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.803127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.803273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.803410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.803557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.803679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.803823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.803957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.803983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.804098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.804125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.804247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.804276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.804371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.804398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.804526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.804554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.804640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.804669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.804782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.804822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.804967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.805007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 00:29:07.640 [2024-12-09 10:39:39.805106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.640 [2024-12-09 10:39:39.805135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.640 qpair failed and we were unable to recover it. 
00:29:07.640 [2024-12-09 10:39:39.805261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.805288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.805405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.805432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.805546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.805573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.805661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.805688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.805774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.805801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.805886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.805914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.806024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.806168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.806314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.806460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.806597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.806739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.806852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.806879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.807002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.807153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.807295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.807440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.807664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.807801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.807949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.807977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.808069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.808096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.808237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.808278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.808375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.808403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.808554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.808581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.808736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.808800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.808901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.808929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.809085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.809278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.809425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.809558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.809669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.809777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.809893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.809921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.810031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.810229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.810374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.810524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.810637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.810747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 00:29:07.641 [2024-12-09 10:39:39.810912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.641 [2024-12-09 10:39:39.810939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.641 qpair failed and we were unable to recover it. 
00:29:07.641 [2024-12-09 10:39:39.811040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.811081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.811175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.811205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.811293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.811320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.811464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.811491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.811607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.811634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.811714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.811741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.811957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.812011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.812157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.812188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.812332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.812364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.812537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.812590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.812678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.812706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.812858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.812912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.813029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.813056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.813166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.813194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.641 [2024-12-09 10:39:39.813319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.641 [2024-12-09 10:39:39.813358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.641 qpair failed and we were unable to recover it.
00:29:07.643 [2024-12-09 10:39:39.827541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.827599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.827714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.827773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.827884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.827910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.828019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.828136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 
00:29:07.643 [2024-12-09 10:39:39.828279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.828425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.828569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.828694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.828864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.828889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 
00:29:07.643 [2024-12-09 10:39:39.829004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.829125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.829248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.829418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.829529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 
00:29:07.643 [2024-12-09 10:39:39.829666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.643 [2024-12-09 10:39:39.829802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.643 [2024-12-09 10:39:39.829827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.643 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.829941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.829967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.830074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.830199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.830315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.830459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.830596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.830703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.830847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.830955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.830983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.831098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.831260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.831401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.831537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.831701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.831847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.831959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.831984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.832096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.832122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.832252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.832292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.832381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.832410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.832539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.832566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.832681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.832708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.832848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.832875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.832990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.833098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.833243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.833370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.833478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.833614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.833716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.833881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.833907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.834381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.834926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.834954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.835036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.835153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.835271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.835387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.835527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.835633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.835772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.835913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.835942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.836034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.836060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.836168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.836195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.836312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.836338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.836484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.836538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.836721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.836775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.836883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.836908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.836998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.837023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.837112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.837137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.837230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.837256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.837368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.837393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.837510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.837536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 00:29:07.644 [2024-12-09 10:39:39.837676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.644 [2024-12-09 10:39:39.837701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.644 qpair failed and we were unable to recover it. 
00:29:07.644 [2024-12-09 10:39:39.837813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.644 [2024-12-09 10:39:39.837838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.644 qpair failed and we were unable to recover it.
00:29:07.644 [2024-12-09 10:39:39.837931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.644 [2024-12-09 10:39:39.837957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.644 qpair failed and we were unable to recover it.
00:29:07.644 [2024-12-09 10:39:39.838043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.838839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.838983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.839896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.839921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.840955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.840982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.841884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.841910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.842046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cf30 is same with the state(6) to be set
00:29:07.645 [2024-12-09 10:39:39.842195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.842236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.842358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.842387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.842504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.842533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.842646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.842673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.842770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.842798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.842902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.842929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.843858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.843883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.844926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.844953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.845061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.845087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.845225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.845251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.845336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.845362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.845478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.645 [2024-12-09 10:39:39.845505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.645 qpair failed and we were unable to recover it.
00:29:07.645 [2024-12-09 10:39:39.845623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.845649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.845761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.845788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.845891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.845940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.846958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.846984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.847947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.847974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.848918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.848959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.849958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.849986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.850966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.850993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.851910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.851936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.852933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.852960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.853069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.853226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.853403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.853584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.853728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.646 [2024-12-09 10:39:39.853871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.646 qpair failed and we were unable to recover it.
00:29:07.646 [2024-12-09 10:39:39.853988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.646 [2024-12-09 10:39:39.854016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.854156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.854183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.854274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.854301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.854441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.854468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.854612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.854639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.854731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.854756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.854877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.854906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.855044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.855178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.855345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.855455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.855610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.855780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.855898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.855923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.856015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.856161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.856283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.856401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.856524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.856679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.856922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.856980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.857074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.857105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.857257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.857285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.857375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.857400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.857541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.857590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.857731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.857797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.857877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.857901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.858422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.858912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.858996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.859108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.859231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.859359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.859539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.859702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.859824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.859954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.859981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.860069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.860221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.860366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.860490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.860602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.860707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.860845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.860886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.861154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.861735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.861973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.861998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.862098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.862124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 00:29:07.647 [2024-12-09 10:39:39.862226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.862255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.647 qpair failed and we were unable to recover it. 
00:29:07.647 [2024-12-09 10:39:39.862342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.647 [2024-12-09 10:39:39.862366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.862477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.862513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.862601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.862627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.862783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.862809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.862928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.862957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.863068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.863258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.863377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.863491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.863608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.863763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.863897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.863922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.864348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.864842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.864961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.864990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.865105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.865235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.865359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.865497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.865611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.865750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.865889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.865918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.866031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.866057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.866145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.866174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.866287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.866314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.866530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.866595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.866724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.866795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.866881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.866909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.867160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.867727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.867943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.867969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.868090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.868234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.868375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.868497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.868615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.868729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.868883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.868911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.869023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.869049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.869169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.869200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.869316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.869343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.869485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.869540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 00:29:07.648 [2024-12-09 10:39:39.869724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.648 [2024-12-09 10:39:39.869752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.648 qpair failed and we were unable to recover it. 
00:29:07.648 [2024-12-09 10:39:39.869860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.869886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.869970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.869999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.870114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.870149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.870269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.870296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.870381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.870407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.870529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.870556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.870652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.870680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.870772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.870800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.870964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.871103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.871270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.871392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.871510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.871716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.871888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.871942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.872021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.872618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.872894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.872979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.873082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.873208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.873324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.873437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.873585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.873696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.873824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.873964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.873992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.874083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.874112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.874268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.874301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.874424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.874452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.874550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.874577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.874676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.874703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.874872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.874930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.875009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.875036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.875129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.875169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.875286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.875314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.875410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.875451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.875626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.875684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.875907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.875962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.876047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.876074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.876188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.876216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.876301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.876326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.876484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.876545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.876626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.876652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.876801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.876874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.876996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.877131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.877328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.877442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.877585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.877725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.877887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.877919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.878022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.878063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.878185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.878215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.878308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.878336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 
00:29:07.649 [2024-12-09 10:39:39.878447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.878473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.649 [2024-12-09 10:39:39.878588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.649 [2024-12-09 10:39:39.878617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.649 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.878735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.878762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.878855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.878884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.878973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.879087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.879206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.879345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.879463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.879573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.879694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.879811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.879958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.879986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.880074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.880197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.880312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.880425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.880530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.880636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.880771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.880949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.880990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.881082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.881216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.881325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.881482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.881630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.881777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.881885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.881912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.882004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.882121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.882258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.882377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.882523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.882639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.882756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.882866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.882893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.883547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.883857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.883986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.884158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.884279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.884398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.884534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.884718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.884864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.884928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.885057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.885205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.885329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.885444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.885554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 
00:29:07.650 [2024-12-09 10:39:39.885677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.885801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.885965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.885993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.886123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.886159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.650 qpair failed and we were unable to recover it. 00:29:07.650 [2024-12-09 10:39:39.886274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.650 [2024-12-09 10:39:39.886301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.886382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.886407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.886521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.886547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.886628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.886654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.886736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.886768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.886860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.886886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.886984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.887629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.887962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.887988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.888075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.888230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.888370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.888517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.888658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.888774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.888895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.888923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.889047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.889087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.889196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.889225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.889313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.889341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.889471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.889532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.889686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.889756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.889844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.889873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.889971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.890000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.890086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.890114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 00:29:07.651 [2024-12-09 10:39:39.890203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.651 [2024-12-09 10:39:39.890229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.651 qpair failed and we were unable to recover it. 
00:29:07.651 [2024-12-09 10:39:39.890318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.890347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.890429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.890456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.890548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.890575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.890693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.890720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.890814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.890840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.890928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.890956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.891867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.891980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.892875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.892902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.651 qpair failed and we were unable to recover it.
00:29:07.651 [2024-12-09 10:39:39.893863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.651 [2024-12-09 10:39:39.893891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.894894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.894994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.895855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.895963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.896939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.896978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.897879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.897993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.898907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.898988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.899817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.899847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.900895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.900922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.652 qpair failed and we were unable to recover it.
00:29:07.652 [2024-12-09 10:39:39.901749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.652 [2024-12-09 10:39:39.901776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.901875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.901902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.901989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.902105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.902244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.902379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.902525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.902639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.902776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.902802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.903912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.903967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.904901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.904991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.905018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.905103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.905132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.905248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.905274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.905363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.905390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.905501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.653 [2024-12-09 10:39:39.905527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.653 qpair failed and we were unable to recover it.
00:29:07.653 [2024-12-09 10:39:39.905644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.905670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.905782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.905808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.905919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.905945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.906027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.906134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 
00:29:07.653 [2024-12-09 10:39:39.906277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.906395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.906513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.906624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.906732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 
00:29:07.653 [2024-12-09 10:39:39.906875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.906900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 
00:29:07.653 [2024-12-09 10:39:39.907581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.907958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.907984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.908090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 
00:29:07.653 [2024-12-09 10:39:39.908239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.908357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.908492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.908610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.908724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 
00:29:07.653 [2024-12-09 10:39:39.908835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.908862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.908974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.909000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.909084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.909111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.653 qpair failed and we were unable to recover it. 00:29:07.653 [2024-12-09 10:39:39.909241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.653 [2024-12-09 10:39:39.909281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.909379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.909408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.909520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.909548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.909644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.909672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.909768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.909796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.909890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.909916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.910046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.910157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.910289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.910399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.910553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.910661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.910770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.910940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.910969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.911069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.911109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.911219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.911248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.911341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.911369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.911522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.911570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.911724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.911788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.911912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.911940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.912079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.912202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.912319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.912436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.912628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.912820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.912942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.912971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.913061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.913216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.913335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.913453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.913564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.913754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.913888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.913915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.913998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.914137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.914262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.914380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.914524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.914632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.914759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.914905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.914932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.915018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.915046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.915162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.915190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.915319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.915348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.915447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.915486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 00:29:07.654 [2024-12-09 10:39:39.915575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.654 [2024-12-09 10:39:39.915604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.654 qpair failed and we were unable to recover it. 
00:29:07.654 [2024-12-09 10:39:39.915690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.915717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.915796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.915822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.915961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.915989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.916873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.916989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.917017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.917136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.654 [2024-12-09 10:39:39.917179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.654 qpair failed and we were unable to recover it.
00:29:07.654 [2024-12-09 10:39:39.917266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.917294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.917412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.917441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.917584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.917612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.917736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.917764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.917877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.917905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.918927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.918956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.919923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.919951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.920825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.920973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.921118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.921266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.921379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.921562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.921703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.921886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.921940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.922940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.922965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.923917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.923943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.924893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.924920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.925007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.925032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.925115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.655 [2024-12-09 10:39:39.925152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.655 qpair failed and we were unable to recover it.
00:29:07.655 [2024-12-09 10:39:39.925273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.925298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.925379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.925406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.925489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.925514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.925652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.925678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.925763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.925789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.925910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.925939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.926055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.926082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.926179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.926216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.926331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.926359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.926513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.926564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.926706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.926750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.926907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.926965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.927916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.927998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.928912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.928989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.929136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.929275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.929474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.929600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.929757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.929929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.929981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.930897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.930926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.931018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.931048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.931164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.656 [2024-12-09 10:39:39.931193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.656 qpair failed and we were unable to recover it.
00:29:07.656 [2024-12-09 10:39:39.931291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.931318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.931403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.931430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.931513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.931539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.931631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.931664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.931782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.931808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 
00:29:07.656 [2024-12-09 10:39:39.931928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.931969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.932061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.932089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.932213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.932243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.932356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.932383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.932466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.932493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 
00:29:07.656 [2024-12-09 10:39:39.932629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.656 [2024-12-09 10:39:39.932681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.656 qpair failed and we were unable to recover it. 00:29:07.656 [2024-12-09 10:39:39.932801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.932852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.932944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.932973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.933068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.933194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.933336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.933447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.933568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.933709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.933828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.933944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.933973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.934546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.934959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.934999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.935095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.935228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.935406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.935543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.935657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.935771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.935918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.935946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.936586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.936904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.936997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.937162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.937279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.937391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.937499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.937617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.937758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.937874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.937902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.938562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.938935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.938963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.939060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.939100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.939204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.939233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.939341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.939368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.939507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.939562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.939758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.939809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.939991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.940152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.940273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.940387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.940530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 00:29:07.657 [2024-12-09 10:39:39.940666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.657 qpair failed and we were unable to recover it. 
00:29:07.657 [2024-12-09 10:39:39.940809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.657 [2024-12-09 10:39:39.940842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.940972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.941134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.941286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.941401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 
00:29:07.658 [2024-12-09 10:39:39.941525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.941723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.941881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.941932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.942015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.942042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 00:29:07.658 [2024-12-09 10:39:39.942172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.658 [2024-12-09 10:39:39.942213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.658 qpair failed and we were unable to recover it. 
00:29:07.658 [... the preceding two-line pattern — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error and "qpair failed and we were unable to recover it." — repeats for roughly one hundred further connect attempts between 10:39:39.942 and 10:39:39.956, cycling over tqpairs 0x1f1efa0, 0x7f5298000b90, 0x7f52a0000b90, and 0x7f5294000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:29:07.660 [2024-12-09 10:39:39.956702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.956728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.956826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.956865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.956966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.956995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.957147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.957177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.957294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.957321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.957413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.957440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.957552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.957579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.957724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.957775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.957895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.957924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.958017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.958155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.958295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.958407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.958544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.958647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.958777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.958905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.958933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.959399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.959870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.959897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.960004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.960119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.960242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.960433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.960571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.960692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.960834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.960860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.960973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.961080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.961211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.961327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.961505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.961641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.961769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.961891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.961920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.962034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.962177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.962347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.962470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.962594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.962708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.962821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.962954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.962979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.963074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.963100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.963240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.963281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 
00:29:07.660 [2024-12-09 10:39:39.963378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.963406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.963485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.963512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.963624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.963651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.963729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.660 [2024-12-09 10:39:39.963755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.660 qpair failed and we were unable to recover it. 00:29:07.660 [2024-12-09 10:39:39.963840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.963865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 
00:29:07.661 [2024-12-09 10:39:39.963953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.963981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.964073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.964247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.964366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.964477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 
00:29:07.661 [2024-12-09 10:39:39.964591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.964758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.964913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.964953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.965046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.965190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 
00:29:07.661 [2024-12-09 10:39:39.965308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.965441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.965583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.965755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.965881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.965908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 
00:29:07.661 [2024-12-09 10:39:39.965997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 
00:29:07.661 [2024-12-09 10:39:39.966569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.966927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.966952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 00:29:07.661 [2024-12-09 10:39:39.967039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.661 [2024-12-09 10:39:39.967065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.661 qpair failed and we were unable to recover it. 
[Log trimmed: the same two-message pair — posix_sock_create connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock sock connection error with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 10:39:39.967162 through 10:39:39.981646, cycling across tqpair values 0x7f52a0000b90, 0x7f5294000b90, 0x7f5298000b90, and 0x1f1efa0.]
00:29:07.663 [2024-12-09 10:39:39.981758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.981786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.981868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.981896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.981993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.982127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.982261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.982427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.982624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.982768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.982881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.982909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.982992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.983113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.983237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.983345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.983460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.983607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.983748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.983868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.983895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.983978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.984084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.984242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.984352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.984518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.984722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.984902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.984931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.985036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.985158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.985279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.985392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.985554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.985709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.985871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.985899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.985976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.986460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 00:29:07.663 [2024-12-09 10:39:39.986942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.663 [2024-12-09 10:39:39.986974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.663 qpair failed and we were unable to recover it. 
00:29:07.663 [2024-12-09 10:39:39.987056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.987162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.987305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.987428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.987581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.987694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.987839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.987867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.987971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.988121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.988264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.988380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.988524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.988633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.988746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.988886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.988913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.988994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.989111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.989236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.989351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.989469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.989611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.989745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.989901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.989930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.990023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.990152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.990269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.990395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.990513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.990621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.990748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.990890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.990918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.664 [2024-12-09 10:39:39.991529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.991899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.991928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 00:29:07.664 [2024-12-09 10:39:39.992029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.664 [2024-12-09 10:39:39.992075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.664 qpair failed and we were unable to recover it. 
00:29:07.666 [2024-12-09 10:39:40.006278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.006305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.006395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.006423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.006537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.006564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.006687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.006714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.006814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.006842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.006961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.006996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.007117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.007176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.007287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.007323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.007437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.007477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.007599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.007638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.007795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.007857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.007980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.008034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.008196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.008240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.008384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.008423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.008556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.008596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.008736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.008776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.008892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.008928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.009408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.009929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.009964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.010058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.010191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.010312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.010430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.010544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.010661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.010773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.010926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.010953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.011040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.011069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.011163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.011194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.011282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.011308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.011397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.011430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.011600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.011653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.011794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.011847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.011975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.012003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 
00:29:07.667 [2024-12-09 10:39:40.012097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.012126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.012236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.012266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.012356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.012383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.012493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.012522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.667 qpair failed and we were unable to recover it. 00:29:07.667 [2024-12-09 10:39:40.012604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.667 [2024-12-09 10:39:40.012633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.012752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.012779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.012899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.012926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.013446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.013920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.013947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.014054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.014170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.014279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.014382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.014483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.014584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.014729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.014844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.014873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.015006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.015129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.015274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.015389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.015532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.015704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.015832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.015858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.015980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.016014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.016098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.016126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.016259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.016285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.016367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.016394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 00:29:07.668 [2024-12-09 10:39:40.016481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.668 [2024-12-09 10:39:40.016508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.668 qpair failed and we were unable to recover it. 
00:29:07.668 [2024-12-09 10:39:40.016600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.016627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.016701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.016727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.016819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.016846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.016932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.016967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.017059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.017084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.017172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.017199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.018495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.018539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.018712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.018741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.018863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.018891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.019016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.019046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.019148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.019186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.019283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.019309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.019399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.019425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.668 [2024-12-09 10:39:40.019504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.668 [2024-12-09 10:39:40.019530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.668 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.019609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.019633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.019725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.019752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.019868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.019894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.020957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.020984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.021948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.021974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.022914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.022941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.023958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.023999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.024173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.024300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.024455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.024577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.024696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.669 [2024-12-09 10:39:40.024841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.669 qpair failed and we were unable to recover it.
00:29:07.669 [2024-12-09 10:39:40.024967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.025916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.025999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.026165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.026314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.026465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.026579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.026726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.026878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.026905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.027881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.027919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.028927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.028953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.029885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.029913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.030952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.030982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.031104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.670 [2024-12-09 10:39:40.031131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.670 qpair failed and we were unable to recover it.
00:29:07.670 [2024-12-09 10:39:40.031231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.031258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.031375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.031414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.031556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.031584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.031699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.031726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.031822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.031850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.031994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.032962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.032990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.033082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.033109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.033214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.033244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.033332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.033360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.033493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.033528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.033610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.671 [2024-12-09 10:39:40.033638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.671 qpair failed and we were unable to recover it.
00:29:07.671 [2024-12-09 10:39:40.033731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.033758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.033840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.033868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.033961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.033988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.034112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.034151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.034308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.034335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 
00:29:07.671 [2024-12-09 10:39:40.034431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.034458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.034553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.034581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.035414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.035448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.035599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.035625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.035728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.035755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 
00:29:07.671 [2024-12-09 10:39:40.035876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.035902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 
00:29:07.671 [2024-12-09 10:39:40.036552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.036869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.036988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.037124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 
00:29:07.671 [2024-12-09 10:39:40.037285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.037416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.037536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.037674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.037819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 
00:29:07.671 [2024-12-09 10:39:40.037940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.671 [2024-12-09 10:39:40.037979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.671 qpair failed and we were unable to recover it. 00:29:07.671 [2024-12-09 10:39:40.038105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.038134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.038236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.038262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.038378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.038413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.042211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.042251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.042608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.042640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.042742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.042770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.042865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.042893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.042995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.043153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.043283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.043398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.043534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.043677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.043823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.043957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.043983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.044065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.044205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.044331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.044476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.044589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.044756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.044905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.044933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.045028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.045170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.045309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.045438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.045590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.045735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.045857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.045885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.046003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.046120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.046260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.046413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.046579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.046718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.046841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.046867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.047005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.047168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.047292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.047439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.047573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.047716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.047879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.047904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 00:29:07.672 [2024-12-09 10:39:40.048027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.672 [2024-12-09 10:39:40.048067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.672 qpair failed and we were unable to recover it. 
00:29:07.672 [2024-12-09 10:39:40.048167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.048277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.048430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.048538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.048659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.048778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.048944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.048970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.049058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.049181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.049337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.049480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.049622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.049779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.049922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.049948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.050040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.050191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.050326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.050476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.050628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.050736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.050874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.050898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.051515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.051958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.051984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.052075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.052207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.052319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.052432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.052576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 
00:29:07.673 [2024-12-09 10:39:40.052720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.052828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.052936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.052961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.053042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.673 [2024-12-09 10:39:40.053068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.673 qpair failed and we were unable to recover it. 00:29:07.673 [2024-12-09 10:39:40.053172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.053299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.053434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.053574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.053706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.053821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.053942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.053969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.054553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.054908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.054934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.055025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.055148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.055262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.055397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.055523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.055643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.055780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.055903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.055931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.056405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.056895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.056920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.057031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.057160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.057302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.057452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.057567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.057677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.057813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.057840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.057975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.058107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.058270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 
00:29:07.674 [2024-12-09 10:39:40.058384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.058493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.058612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.674 qpair failed and we were unable to recover it. 00:29:07.674 [2024-12-09 10:39:40.058754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.674 [2024-12-09 10:39:40.058781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.058888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.058940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 
00:29:07.675 [2024-12-09 10:39:40.059046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.059167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.059286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.059437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.059573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 
00:29:07.675 [2024-12-09 10:39:40.059714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.059819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.059944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.059970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.060066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.060195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 
00:29:07.675 [2024-12-09 10:39:40.060332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.060460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.060565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.060719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 00:29:07.675 [2024-12-09 10:39:40.060837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.675 [2024-12-09 10:39:40.060864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.675 qpair failed and we were unable to recover it. 
00:29:07.675 [2024-12-09 10:39:40.060958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.060987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.061863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.061898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.675 [2024-12-09 10:39:40.062870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.675 [2024-12-09 10:39:40.062904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.675 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.063969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.063996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.064920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.064946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.065870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.065899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.066007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.066046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.066160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.066191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.066284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.948 [2024-12-09 10:39:40.066311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.948 qpair failed and we were unable to recover it.
00:29:07.948 [2024-12-09 10:39:40.066398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.066424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.066545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.066571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.066653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.066680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.066771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.066800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.066934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.066963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.067895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.067996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.068908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.068935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.069916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.069945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.070873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.070990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.071956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.071983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.072070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.072094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.949 [2024-12-09 10:39:40.072194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.949 [2024-12-09 10:39:40.072221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.949 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.072331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.072355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.072445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.072471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.072566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.072591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.072703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.072733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.072818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.072844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.072960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.072987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.073902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.073927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.074894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.074932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.075090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.075217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.075327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.075501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.075668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.950 [2024-12-09 10:39:40.075821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.950 qpair failed and we were unable to recover it.
00:29:07.950 [2024-12-09 10:39:40.075959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.075984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.076094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.076120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.076214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.076241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.076347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.076394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.076557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.076590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 
00:29:07.950 [2024-12-09 10:39:40.076742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.076776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.076950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.076985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.077148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.077183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.077298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.077324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.077414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.077457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 
00:29:07.950 [2024-12-09 10:39:40.077557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.077591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.077754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.077787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.077908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.077957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.078068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.078094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.078200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.078227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 
00:29:07.950 [2024-12-09 10:39:40.078310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.078337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.078436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.078467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.078552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.078578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.078655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.950 [2024-12-09 10:39:40.078681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.950 qpair failed and we were unable to recover it. 00:29:07.950 [2024-12-09 10:39:40.078797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.078825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.078979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.079633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.079891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.079973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.080123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.080262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.080376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.080496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.080660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.080779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.080889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.080916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.080996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.081130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.081285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.081387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.081528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.081690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.081855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.081894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.082017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.082150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.082290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.082438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.082551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.082665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.082822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.082946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.082982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.083618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.083870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.083986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.084014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.084099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.084125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 
00:29:07.951 [2024-12-09 10:39:40.084253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.084280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.084367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.084393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.084472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.084497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.084577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.951 [2024-12-09 10:39:40.084603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.951 qpair failed and we were unable to recover it. 00:29:07.951 [2024-12-09 10:39:40.084714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.084740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 
00:29:07.952 [2024-12-09 10:39:40.084854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.084880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 
00:29:07.952 [2024-12-09 10:39:40.085558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.085954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.085981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.086065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 
00:29:07.952 [2024-12-09 10:39:40.086192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.086299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.086419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.086522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.086657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 
00:29:07.952 [2024-12-09 10:39:40.086798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.086911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.086937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.087050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.087077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.087191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.087236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 00:29:07.952 [2024-12-09 10:39:40.087335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.952 [2024-12-09 10:39:40.087363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.952 qpair failed and we were unable to recover it. 
00:29:07.952 [2024-12-09 10:39:40.087502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.087529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.087618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.087645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.087739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.087768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.087857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.087882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.087965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.087992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.088971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.088997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.089905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.089932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.090014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.090041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.952 [2024-12-09 10:39:40.090118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.952 [2024-12-09 10:39:40.090150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.952 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.090300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.090326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.090427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.090453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.090530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.090556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.090638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.090666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.090761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.090792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.090911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.090946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.091899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.091947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.092080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.092119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.092228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.092257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.092374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.092400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.092503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.092536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.092673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.092707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.092880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.092939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.093875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.093899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.094901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.094927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.095898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.095940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.096091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.096245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.096385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.096503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.096675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.953 [2024-12-09 10:39:40.096780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.953 qpair failed and we were unable to recover it.
00:29:07.953 [2024-12-09 10:39:40.096853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.096879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.096986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.097892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.097931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.098894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.098920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.099885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.099996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.100026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.100113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.100149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.100252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.954 [2024-12-09 10:39:40.100277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.954 qpair failed and we were unable to recover it.
00:29:07.954 [2024-12-09 10:39:40.100367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.100392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.100480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.100506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.100592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.100617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.100731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.100757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.100860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.100899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 
00:29:07.954 [2024-12-09 10:39:40.100998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.101152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.101279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.101432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.101566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 
00:29:07.954 [2024-12-09 10:39:40.101692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.101840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.101866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.101979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.102122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.102278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 
00:29:07.954 [2024-12-09 10:39:40.102418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.102564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.102680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.102794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.954 [2024-12-09 10:39:40.102820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.954 qpair failed and we were unable to recover it. 00:29:07.954 [2024-12-09 10:39:40.102901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.102927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.103048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.103173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.103318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.103437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.103586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.103707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.103846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.103961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.103987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.104088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.104213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.104332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.104446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.104585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.104729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.104852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.104897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.104992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.105136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.105259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.105414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.105577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.105747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.105861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.105887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.105980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.106096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.106233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.106367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.106486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.106624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.106759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.106869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.106897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.106988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.107146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.107307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.107454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.107619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 
00:29:07.955 [2024-12-09 10:39:40.107778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.107908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.107933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.108067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.108093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.108207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.955 [2024-12-09 10:39:40.108234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.955 qpair failed and we were unable to recover it. 00:29:07.955 [2024-12-09 10:39:40.108330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.108358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956 [2024-12-09 10:39:40.108447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.108474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.108592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.108618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.108704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.108730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.108823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.108855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.108941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.108973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956 [2024-12-09 10:39:40.109094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.109237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.109375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.109500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.109619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956 [2024-12-09 10:39:40.109735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.109898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.109923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956 [2024-12-09 10:39:40.110415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.110909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.110948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956 [2024-12-09 10:39:40.111033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.111060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.111179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.111207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.111324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.111350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.111441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.111467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 00:29:07.956 [2024-12-09 10:39:40.111578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.111604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956 [2024-12-09 10:39:40.111743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.956 [2024-12-09 10:39:40.111768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.956 qpair failed and we were unable to recover it. 
00:29:07.956–00:29:07.959 [10:39:40.111907 … 10:39:40.127448] (the same posix.c:1054 "connect() failed, errno = 111" / nvme_tcp.c:2288 "sock connection error" / "qpair failed and we were unable to recover it" triplet repeats continuously for tqpairs 0x1f1efa0, 0x7f5294000b90, 0x7f5298000b90, and 0x7f52a0000b90, all targeting addr=10.0.0.2, port=4420)
00:29:07.959 [2024-12-09 10:39:40.127608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.127652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.127788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.127823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.127929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.127972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.128076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.128102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.128202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.128229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.128362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.128394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.128557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.128589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.128723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.128756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.128959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.128998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.129098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.129268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.129404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.129522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.129645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.129753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.129916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.129942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.130055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.130166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.130283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.130394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.130560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.130739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.130923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.130957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.131065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.131230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.131367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.131487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.131618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.131767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.131946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.131972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.132110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.132136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.132236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.132262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.132347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.132372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.132447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.132494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.132639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.132688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 
00:29:07.959 [2024-12-09 10:39:40.132852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.132886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.132985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.959 [2024-12-09 10:39:40.133034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.959 qpair failed and we were unable to recover it. 00:29:07.959 [2024-12-09 10:39:40.133112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.133137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.133234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.133260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.133365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.133391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.133480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.133531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.133663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.133707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.133880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.133918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.134076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.134102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.134212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.134240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.134327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.134355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.134487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.134521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.134633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.134681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.134856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.134890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.135009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.135160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.135270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.135399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.135596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.135762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.135911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.135946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.136095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.136121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.136253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.136281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.136383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.136422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.136546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.136594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.136767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.136814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.136951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.137084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.137254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.137373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.137486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.137636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.137754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.137859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.137885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.138010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.138123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.138254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.138399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.138509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.138669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.138803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.138838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.139007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.139043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.139198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.139225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.139354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.139388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.139533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.139567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 
00:29:07.960 [2024-12-09 10:39:40.139711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.139745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.139857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.139891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.140008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.140044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.140184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.960 [2024-12-09 10:39:40.140228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.960 qpair failed and we were unable to recover it. 00:29:07.960 [2024-12-09 10:39:40.140333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.961 [2024-12-09 10:39:40.140366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.961 qpair failed and we were unable to recover it. 
00:29:07.961 [2024-12-09 10:39:40.140533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.140566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.140718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.140752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.140871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.140906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.141097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.141123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.141221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.141248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.141383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.141416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.141528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.141561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.141694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.141728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.141845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.141895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.142968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.142993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.143112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.143137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.143246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.143272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.143374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.143407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.143562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.143595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.143735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.143768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.143879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.143913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.144891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.144916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.145034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.145151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.145320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.145560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.145721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.145865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.145972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.146018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.146165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.146193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.146328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.146376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.146561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.146607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.146742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.146790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.146902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.146928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.147016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.147042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.961 [2024-12-09 10:39:40.147126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.961 [2024-12-09 10:39:40.147158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.961 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.147282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.147333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.147455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.147502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.147619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.147652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.147764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.147790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.147907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.147933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.148888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.148914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.149900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.149926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.150020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.150046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.150168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.150194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.150305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.150339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.150483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.150517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.150630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.150666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.150856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.150908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.151009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.151036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.151150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.151177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.151310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.151359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.151520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.151573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.151730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.151768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.151918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.151954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.152098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.152125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.152247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.152274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.152364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.152390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.152522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.152547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.152749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.152783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.152946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.152979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.153113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.153146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.153275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.153301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.153440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.153473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.153591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.153635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.153755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.153790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.153919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.153971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.154079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.154105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.962 qpair failed and we were unable to recover it.
00:29:07.962 [2024-12-09 10:39:40.154229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.962 [2024-12-09 10:39:40.154269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.963 qpair failed and we were unable to recover it.
00:29:07.963 [2024-12-09 10:39:40.154370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.963 [2024-12-09 10:39:40.154407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.963 qpair failed and we were unable to recover it.
00:29:07.963 [2024-12-09 10:39:40.154574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.963 [2024-12-09 10:39:40.154600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.963 qpair failed and we were unable to recover it.
00:29:07.963 [2024-12-09 10:39:40.154709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.963 [2024-12-09 10:39:40.154735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.963 qpair failed and we were unable to recover it.
00:29:07.963 [2024-12-09 10:39:40.154854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.963 [2024-12-09 10:39:40.154890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.963 qpair failed and we were unable to recover it.
00:29:07.963 [2024-12-09 10:39:40.155094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.155133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.155281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.155307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.155409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.155437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.155562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.155632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.155804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.155839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.155988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.156022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.156176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.156215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.156312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.156339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.156486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.156525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.156672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.156708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.156851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.156886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.157123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.157164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.157267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.157293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.157439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.157477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.157664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.157698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.157831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.157865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.158022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.158177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.158296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.158398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.158565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.158747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.158917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.158961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.159070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.159095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.159253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.159279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.159360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.159386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.159544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.159570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.159656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.159681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.159858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.159893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.160101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.160136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.160285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.160311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.160398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.160441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.160637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.160673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.160827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.160879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.161012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.161046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 
00:29:07.963 [2024-12-09 10:39:40.161186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.161212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.161299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.963 [2024-12-09 10:39:40.161324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.963 qpair failed and we were unable to recover it. 00:29:07.963 [2024-12-09 10:39:40.161470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.161505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.161611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.161646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.161820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.161854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.161979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.162171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.162291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.162433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.162603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.162761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.162950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.163082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.163121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.163269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.163297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.163409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.163435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.163524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.163551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.163674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.163721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.163868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.163895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.163991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.164135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.164250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.164395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.164531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.164739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.164900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.164938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.165065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.165102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.165277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.165313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.165450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.165490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.165642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.165676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.165817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.165852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.166002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.166046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.166158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.166190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.166272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.166298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.166413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.166438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.166578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.166614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.166736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.166779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.166968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.167005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.167222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.167262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.167351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.167386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.167502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.167530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.167733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.167769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.167902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.167946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.168069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.168105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.168298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.168326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.168440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.168466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 
00:29:07.964 [2024-12-09 10:39:40.168581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.168623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.168741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.168778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.964 qpair failed and we were unable to recover it. 00:29:07.964 [2024-12-09 10:39:40.168977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.964 [2024-12-09 10:39:40.169012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.169127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.169171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.169281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.169306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.169425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.169450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.169565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.169590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.169763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.169797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.169923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.169948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.170134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.170208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.170300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.170326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.170413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.170461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.170586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.170622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.170767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.170803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.170918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.170955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.171102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.171149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.171290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.171316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.171463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.171524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.171672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.171723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.171863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.171908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.171998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.172135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.172312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.172443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.172616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.172759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.172874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.172901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.172986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.173153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.173317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.173426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.173540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.173687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.173850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.173880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.174055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.174089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.174228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.174254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.174369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.174394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.174510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.174546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.174678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.174722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.174912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.174949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.175087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.175155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.175305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.175333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.175437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.175465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.175634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.175672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.175844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.175882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 
00:29:07.965 [2024-12-09 10:39:40.176078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.176162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.176278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.176306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.965 [2024-12-09 10:39:40.176429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.965 [2024-12-09 10:39:40.176457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.965 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.176544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.176597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.176716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.176753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.176960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.177123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.177282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.177401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.177593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.177759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.177893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.177918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.178368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.178887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.178912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.179019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.179217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.179359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.179460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.179592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.179713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.179830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.179959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.179988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.180081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.180114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.180263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.180290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.180373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.180399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.180519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.180545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.180696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.180732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.180874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.180910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.181023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.181058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.181178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.181226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.181405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.181441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.181603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.181639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.181787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.181823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.181953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.181978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.182120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.182159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.182242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.182267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.182384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.182421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.182573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.182609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.182786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.182822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 
00:29:07.966 [2024-12-09 10:39:40.182963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.183014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.183160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.183203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.183301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.183326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.183420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.966 [2024-12-09 10:39:40.183446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.966 qpair failed and we were unable to recover it. 00:29:07.966 [2024-12-09 10:39:40.183612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.967 [2024-12-09 10:39:40.183646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.967 qpair failed and we were unable to recover it. 
00:29:07.967 [2024-12-09 10:39:40.183763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.967 [2024-12-09 10:39:40.183788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.967 qpair failed and we were unable to recover it. 00:29:07.967 [2024-12-09 10:39:40.183994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.967 [2024-12-09 10:39:40.184030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.967 qpair failed and we were unable to recover it. 00:29:07.967 [2024-12-09 10:39:40.184155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.967 [2024-12-09 10:39:40.184201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.967 qpair failed and we were unable to recover it. 00:29:07.967 [2024-12-09 10:39:40.184288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.967 [2024-12-09 10:39:40.184313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.967 qpair failed and we were unable to recover it. 00:29:07.967 [2024-12-09 10:39:40.184449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.967 [2024-12-09 10:39:40.184474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.967 qpair failed and we were unable to recover it. 
00:29:07.967 [2024-12-09 10:39:40.184654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.184710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.184887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.184927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.185082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.185120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.185276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.185302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.185418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.185444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.185580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.185617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.185734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.185772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.185997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.186060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.186234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.186300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.186439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.186512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.186737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.186802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.186997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.187974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.187999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.188852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.188881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.189845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.189884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.190002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.190029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.190149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.190176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.190311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.967 [2024-12-09 10:39:40.190359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.967 qpair failed and we were unable to recover it.
00:29:07.967 [2024-12-09 10:39:40.190503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.190556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.190707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.190746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.190883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.190909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.190990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.191896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.191985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.192896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.192927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.193068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.193094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.193218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.193256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.193403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.193441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.193572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.193610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.193722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.193759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.193917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.193954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.194112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.194157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.194301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.194338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.194520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.194557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.194740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.194776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.194921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.194962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.195099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.195125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.195259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.195297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.195395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.195423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.195545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.195571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.195736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.195772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.195895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.195948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.196099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.196134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.196256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.196283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.196431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.196470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.196628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.196665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.196783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.196827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.197028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.197175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.197323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.197520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.197666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.968 [2024-12-09 10:39:40.197862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.968 qpair failed and we were unable to recover it.
00:29:07.968 [2024-12-09 10:39:40.197985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.198023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.198168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.198195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.198310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.198336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.198473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.198510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.198674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.198711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.198856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.198893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.199077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.199114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.199263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.199303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.199454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.199496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.199619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.199657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.199800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.199837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.199982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.200018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.200213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.200252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.200353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.200381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.200486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.200526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.200691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.200740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.200895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.200933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.201918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.201954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.202104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.202151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.202267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.202293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.202408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.202459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.202610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.202648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.202778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.202816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.203057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.203121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.203320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.203345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.203468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.203537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.203751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.969 [2024-12-09 10:39:40.203791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:07.969 qpair failed and we were unable to recover it.
00:29:07.969 [2024-12-09 10:39:40.203911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.203985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.204137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.204198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.204289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.204314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.204403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.204429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.204544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.204584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 
00:29:07.969 [2024-12-09 10:39:40.204707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.204757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.204951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.204994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.205154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.205212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.205300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.205328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.205418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.205444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 
00:29:07.969 [2024-12-09 10:39:40.205533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.205589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.205759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.205798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.205922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.205963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.206120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.206172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 00:29:07.969 [2024-12-09 10:39:40.206296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.969 [2024-12-09 10:39:40.206322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.969 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.206413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.206439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.206629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.206655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.206816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.206856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.207016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.207044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.207130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.207165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.207255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.207280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.207418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.207443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.207577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.207615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.207824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.207861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.208016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.208054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.208175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.208222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.208309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.208334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.208416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.208441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.208548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.208588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.208772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.208810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.209016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.209055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.209293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.209319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.209482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.209547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.209764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.209820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.209974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.210031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.210148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.210174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.210277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.210318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.210474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.210520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.210695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.210749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.210877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.210926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.211034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.211200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.211364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.211506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.211623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.211736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.211856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.211883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.211994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.212112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.212236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.212337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.212454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.212559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.212688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.212814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.212946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.212971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.213062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.213100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.213210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.213238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.213355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.213381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 00:29:07.970 [2024-12-09 10:39:40.213469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.970 [2024-12-09 10:39:40.213495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.970 qpair failed and we were unable to recover it. 
00:29:07.970 [2024-12-09 10:39:40.213606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.213632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.213781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.213807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.213903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.213928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.214041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.214066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.214192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.214251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.214382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.214409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.214544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.214599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.214699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.214738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.214870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.214896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.215011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.215036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.215124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.215159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.215338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.215376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.215587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.215646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.215787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.215830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.215957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.215998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.216160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.216207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.216349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.216400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.216514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.216540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.216657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.216683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.216790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.216816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.216937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.216963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.217047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.217073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.217194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.217221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.217308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.217334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.217445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.217470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.217627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.217658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.217800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.217841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.217989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.218027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.218154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.218182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.218314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.218366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.218509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.218549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.218720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.218755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.218902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.218927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.219038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.219064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.219173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.219200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.219322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.219361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.219509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.219547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.219704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.219743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.219863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.219901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.220059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.220086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 
00:29:07.971 [2024-12-09 10:39:40.220196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.220223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.220329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.220368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.971 [2024-12-09 10:39:40.220555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.971 [2024-12-09 10:39:40.220593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.971 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.220747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.220786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.220909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.220967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.221098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.221235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.221368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.221534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.221649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.221780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.221920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.221946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.222395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.222842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.222987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.223147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.223331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.223472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.223589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.223730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.223897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.223923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.224017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.224043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.224177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.224217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.224319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.224357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.224512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.224561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.224689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.224744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.224852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.224905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.225017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.225163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.225293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.225486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.225647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.225755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.225897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.225924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.226025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.226167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.226317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.226459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.226573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.226686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 
00:29:07.972 [2024-12-09 10:39:40.226828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.226866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.227017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.227055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.227217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.227242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.972 qpair failed and we were unable to recover it. 00:29:07.972 [2024-12-09 10:39:40.227417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.972 [2024-12-09 10:39:40.227455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.227560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.227598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 
00:29:07.973 [2024-12-09 10:39:40.227758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.227796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.227919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.227975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.228096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.228147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.228268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.228296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.228409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.228467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 
00:29:07.973 [2024-12-09 10:39:40.228632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.228671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.228805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.228844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.228991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.229037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.229151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.229179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.229297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.229323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 
00:29:07.973 [2024-12-09 10:39:40.229428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.229469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.229670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.229709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.229842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.229912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.230070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.230111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.230263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.230290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 
00:29:07.973 [2024-12-09 10:39:40.230375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.230426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.230600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.230638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.230813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.230848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.230956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.230991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 00:29:07.973 [2024-12-09 10:39:40.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.973 [2024-12-09 10:39:40.231221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:07.973 qpair failed and we were unable to recover it. 
00:29:07.973 [2024-12-09 10:39:40.231340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:07.973 [2024-12-09 10:39:40.231367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:07.973 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." record repeats from 10:39:40.231340 through 10:39:40.251277, always against addr=10.0.0.2, port=4420, cycling through tqpair values 0x1f1efa0, 0x7f5294000b90, 0x7f5298000b90, and 0x7f52a0000b90 ...]
00:29:07.975 [2024-12-09 10:39:40.251406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.251447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.251580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.251627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.251754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.251794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.251974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.252009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.252187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.252222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.252409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.252449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.252620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.252660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.252811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.252851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.253038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.253073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.253225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.253260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.253429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.253469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.253650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.253689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.253855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.253895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.254029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.254069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.254236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.254278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.254415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.254455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.254621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.254661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.254792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.254833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.254996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.255036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.255182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.255225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.255373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.255438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.255677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.255717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.255890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.255931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.256129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.256180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.256354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.256402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.256574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.256614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.256753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.256793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.256953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.256992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.257158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.257209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.257356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.257396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.257522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.257562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.257724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.257764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.257941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.257982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.258178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.258219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.258368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.258408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.258543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.258584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.258746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.258787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.258947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.259012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.259165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.259205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.259402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.259444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.259570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.259612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.259806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.259852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.260013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.260054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.260248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.260290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.260420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.260459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 
00:29:07.976 [2024-12-09 10:39:40.260596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.260637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.976 qpair failed and we were unable to recover it. 00:29:07.976 [2024-12-09 10:39:40.260773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.976 [2024-12-09 10:39:40.260813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.260999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.261033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.261187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.261223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.261396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.261439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.977 [2024-12-09 10:39:40.261612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.261652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.261792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.261832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.261989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.262028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.262195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.262236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.262376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.262416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.977 [2024-12-09 10:39:40.262603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.262638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.262782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.262817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.262984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.263025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.263244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.263285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.263411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.263463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.977 [2024-12-09 10:39:40.263664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.263705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.263898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.263938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.264082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.264176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.264316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.264360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.264521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.264561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.977 [2024-12-09 10:39:40.264701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.264741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.264900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.264940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.265116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.265166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.265449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.265489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.265668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.265709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.977 [2024-12-09 10:39:40.265888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.265928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.266065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.266105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.266264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.266306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.266492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.266527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.266670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.266706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.977 [2024-12-09 10:39:40.266868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.266911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.267118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.267176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.267378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.267421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.267586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.267628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 00:29:07.977 [2024-12-09 10:39:40.267840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.977 [2024-12-09 10:39:40.267874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.977 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.292040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.292084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.292248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.292294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.292473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.292517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.292702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.292736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.292842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.292879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.293026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.293061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.293220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.293265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.293466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.293500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.293644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.293678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.293824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.293868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.294093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.294128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.294285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.294326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.294526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.294584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.294801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.294858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.295125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.295210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.295391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.295435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.295684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.295732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.295949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.295996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.296193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.296242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.296464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.296510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.296660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.296706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.296879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.296926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.297109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.297168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.297329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.297375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.297535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.297582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.297752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.297800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.297988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.298036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.298227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.298277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.298474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.298522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.298706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.298753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.298907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.298977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.299172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.299221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.299404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.299452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.299585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.299631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.299803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.299850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 
00:29:07.980 [2024-12-09 10:39:40.300126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.980 [2024-12-09 10:39:40.300195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.980 qpair failed and we were unable to recover it. 00:29:07.980 [2024-12-09 10:39:40.300388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.300435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.300587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.300634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.300826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.300874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.301066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.301113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.301331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.301378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.301572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.301619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.301851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.301898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.302090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.302137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.302351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.302398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.302561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.302608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.302748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.302796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.303046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.303101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.303330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.303377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.303535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.303582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.303810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.303857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.304040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.304080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.304206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.304241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.304386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.304422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.304643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.304690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.304851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.304898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.305090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.305137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.305363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.305410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.305590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.305637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.305874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.305909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.306052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.306086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.306294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.306349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.306519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.306588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.306849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.306902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.307173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.307244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.307453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.307487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.307612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.307647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.307825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.307873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.308030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.308078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.308248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.308295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.308477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.308524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.308685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.308733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.308954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.309003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.309204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.309253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 00:29:07.981 [2024-12-09 10:39:40.309482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.309530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it. 
00:29:07.981 [2024-12-09 10:39:40.309721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.981 [2024-12-09 10:39:40.309768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.981 qpair failed and we were unable to recover it.
[identical connect() failure messages (errno = 111, tqpair=0x7f52a0000b90, addr=10.0.0.2, port=4420) repeated through 2024-12-09 10:39:40.340970; duplicate log lines omitted]
00:29:07.984 [2024-12-09 10:39:40.341154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.341221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.341405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.341461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.341694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.341729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.341881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.341915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.342120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.342192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 
00:29:07.984 [2024-12-09 10:39:40.342402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.342456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.342712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.342766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.343008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.343062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.343307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.343363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.343546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.343600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 
00:29:07.984 [2024-12-09 10:39:40.343831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.343881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.344130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.344200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.344451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.344505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.344682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.344735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.344948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.344983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 
00:29:07.984 [2024-12-09 10:39:40.345107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.345157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.345370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.345424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.345597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.345653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.345867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.345923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.346192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.346249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 
00:29:07.984 [2024-12-09 10:39:40.346441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.346496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.346738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.346802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.347058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.347113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.347307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.347364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.984 [2024-12-09 10:39:40.347617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.347671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 
00:29:07.984 [2024-12-09 10:39:40.347922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.984 [2024-12-09 10:39:40.347976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.984 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.348162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.348219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.348415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.348470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.348715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.348770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.348952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.349008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.349236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.349291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.349542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.349597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.349801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.349856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.350054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.350109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.350332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.350406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.350682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.350754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.350958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.351014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.351232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.351308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.351543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.351616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.351826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.351881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.352130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.352200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.352437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.352510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.352755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.352810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.353022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.353078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.353345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.353417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.353667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.353701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.353884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.353919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.354164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.354220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.354495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.354577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.354789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.354844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.355073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.355107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.355235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.355271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.355449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.355484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.355629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.355663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.355778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.355813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.355959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.355994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.356136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.356182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.356325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.356360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.356503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.356538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.356649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.356684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.356802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.356838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.356954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.356988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.357133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.357192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.357384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.357429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.357641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.357685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.357916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.357971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.358200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.358258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.358416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.358480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.358692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.358736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 00:29:07.985 [2024-12-09 10:39:40.358903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.358947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.985 qpair failed and we were unable to recover it. 
00:29:07.985 [2024-12-09 10:39:40.359130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.985 [2024-12-09 10:39:40.359185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.359370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.359414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.359556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.359602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.359808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.359854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.360100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.360156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 
00:29:07.986 [2024-12-09 10:39:40.360369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.360440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.360714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.360787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.360995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.361049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.361263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.361357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 00:29:07.986 [2024-12-09 10:39:40.361631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:07.986 [2024-12-09 10:39:40.361686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:07.986 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.393318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.393391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.393638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.393710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.393925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.393981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.394213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.394290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.394558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.394631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.394868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.394922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.395172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.395227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.395455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.395509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.395762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.395833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.396082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.396136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.396409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.396482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.396723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.396797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.396980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.397036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.397260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.397334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.397535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.397611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.397836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.397891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.398108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.398197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.398465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.398519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.398766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.398829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.399064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.399119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.399387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.399470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.399745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.399817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.400064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.400118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.400417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.400489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.400779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.400852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.401073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.401128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.401384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.401441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.401719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.401790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.401947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.402000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.402163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.402219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.402422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.402494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.402699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.402770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.403049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.403105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.403342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.403416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.403649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.403721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.403917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.403971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.404155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.404212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.404415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.404490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.404728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.404800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.405010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.405065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.405345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.405420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.405620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.405692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.405946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.406001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.406259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.406335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.406598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.406673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.406904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.406960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.407163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.407218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.407444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.407499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.407738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.407811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.408027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.408080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.408379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.408454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.408725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.408798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.409035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.409089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.409326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.409401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.409649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.409722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 
00:29:08.264 [2024-12-09 10:39:40.409946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.410000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.410238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.410312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.264 qpair failed and we were unable to recover it. 00:29:08.264 [2024-12-09 10:39:40.410502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.264 [2024-12-09 10:39:40.410576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.410784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.410865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.411057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.411112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 
00:29:08.265 [2024-12-09 10:39:40.411331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.411385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.411600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.411655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.411903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.411958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.412197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.412253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.412481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.412535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 
00:29:08.265 [2024-12-09 10:39:40.412754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.412809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.413028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.413082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.413338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.413414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.413637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.413711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.413936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.413990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 
00:29:08.265 [2024-12-09 10:39:40.414219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.414295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.414578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.414651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.414831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.414885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.415066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.415120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.415387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.415440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 
00:29:08.265 [2024-12-09 10:39:40.415671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.415725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.415883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.415935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.416125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.416192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.416412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.416465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 00:29:08.265 [2024-12-09 10:39:40.416685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.265 [2024-12-09 10:39:40.416741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.265 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.449276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.449350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.449615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.449669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.449949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.450004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.450283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.450358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.450614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.450686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.450876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.450931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.451099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.451167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.451482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.451537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.451719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.451773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.451945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.451999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.452273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.452327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.452501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.452557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.452810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.452865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.453053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.453108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.453345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.453401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.453626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.453681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.453872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.453927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.454164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.454220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.454481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.454535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.454723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.454777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.454961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.455015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.455252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.455327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.455532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.455607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.455835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.455889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.456107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.456173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.456416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.456489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.456716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.456789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.457014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.457067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.457287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.457375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.457633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.457705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.457874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.457930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.458123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.458191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.458446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.458519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.458766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.458837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.459032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.459086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 
00:29:08.267 [2024-12-09 10:39:40.459364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.459439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.459721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.459794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.460012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.460066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.460313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.460368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.267 qpair failed and we were unable to recover it. 00:29:08.267 [2024-12-09 10:39:40.460609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.267 [2024-12-09 10:39:40.460683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.460870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.460923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.461102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.461189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.461491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.461564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.461849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.461922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.462107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.462178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.462406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.462482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.462777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.462832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.463108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.463179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.463348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.463404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.463662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.463736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.463985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.464039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.464302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.464377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.464633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.464706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.464942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.464998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.465235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.465311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.465610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.465686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.465876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.465931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.466102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.466169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.466412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.466485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.466677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.466749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.466983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.467037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.467278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.467353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.467575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.467646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.467826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.467880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.468091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.468160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.468431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.468505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.468716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.468792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.469003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.469057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.469291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.469379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.469672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.469745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.469936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.469990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.470179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.470234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.470507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.470579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.470820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.470893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.471083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.471137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.268 [2024-12-09 10:39:40.471422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.471496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.471771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.471842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.472090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.472156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.472344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.472400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 00:29:08.268 [2024-12-09 10:39:40.472581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.268 [2024-12-09 10:39:40.472635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.268 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-12-09 10:39:40.504867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.504922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.505132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.505199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.505401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.505455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.505664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.505721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.505943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.505996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-12-09 10:39:40.506197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.506253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.506488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.506558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.506803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.506879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.507111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.507181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.507446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.507500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-12-09 10:39:40.507771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.507843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.508095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.508164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.508454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.508509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.508782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.508856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.509126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.509209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-12-09 10:39:40.509432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.509514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.509791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.509866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.510044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.510099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.510368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.510442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.510697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.510770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 
00:29:08.270 [2024-12-09 10:39:40.510987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.511042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.511294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.270 [2024-12-09 10:39:40.511369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.270 qpair failed and we were unable to recover it. 00:29:08.270 [2024-12-09 10:39:40.511585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.511657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.511944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.512000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.512221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.512301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.512552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.512635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.512823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.512878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.513093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.513159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.513448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.513523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.513766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.513838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.514050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.514104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.514369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.514441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.514710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.514782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.514985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.515040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.515215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.515272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.515471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.515546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.515734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.515789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.516010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.516064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.516329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.516403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.516692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.516765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.516981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.517037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.517305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.517380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.517678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.517750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.517975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.518029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.518235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.518291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.518536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.518608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.518897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.518969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.519184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.519241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.519437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.519518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.519782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.519854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.520036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.520091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.520355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.520429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.520694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.520768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.520976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.521031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.521227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.521283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.521463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.521517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.521767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.521821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.522032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.522087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.522290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.522347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.522567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.522621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.522799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.522853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.523055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.523109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.523284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.523345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.523535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.523589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.523809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.523864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.524083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.524159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.524340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.524394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.524594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.524649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.524866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.524919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.525101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.525189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.525397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.525452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.525675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.525729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.525938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.525993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.526252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.526331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 00:29:08.271 [2024-12-09 10:39:40.526599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.526673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.271 [2024-12-09 10:39:40.526893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.271 [2024-12-09 10:39:40.526946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.271 qpair failed and we were unable to recover it. 
00:29:08.273 [last message group repeated 114 more times through 2024-12-09 10:39:40.560501; every connect() attempt to addr=10.0.0.2, port=4420 on tqpair=0x7f52a0000b90 failed with errno = 111 (ECONNREFUSED) and the qpair could not be recovered]
00:29:08.273 [2024-12-09 10:39:40.560719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.560773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-12-09 10:39:40.561020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.561074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-12-09 10:39:40.561323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.561399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-12-09 10:39:40.561602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.561674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-12-09 10:39:40.561895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.561949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 
00:29:08.273 [2024-12-09 10:39:40.562124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.562195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-12-09 10:39:40.562431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.562485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.273 qpair failed and we were unable to recover it. 00:29:08.273 [2024-12-09 10:39:40.562706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.273 [2024-12-09 10:39:40.562780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.563026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.563080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.563282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.563339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.563548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.563604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.563812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.563866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.564091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.564161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.564359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.564415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.564653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.564708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.564926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.564980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.565253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.565309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.565500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.565554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.565730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.565785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.566034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.566089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.566284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.566339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.566553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.566608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.566795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.566850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.567067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.567121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.567342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.567417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.567605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.567677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.567896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.567950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.568222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.568299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.568512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.568566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.568742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.568798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.569011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.569066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.569271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.569347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.569551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.569630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.569883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.569937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.570168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.570224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.570394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.570458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.570634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.570688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.570905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.570959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.571167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.571223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.571472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.571526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.571735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.571789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.571985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.572040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.572281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.572357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.572652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.572706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.572912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.572966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.573244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.573321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.573554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.573608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.573834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.573888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.574174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.574230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.574436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.574509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.574718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.574771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.574938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.574994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.575175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.575250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.575504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.575577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.575833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.575887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.576056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.576110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.576398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.576471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.576711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.576783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.576997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.577051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.577304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.577381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 
00:29:08.274 [2024-12-09 10:39:40.577611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.274 [2024-12-09 10:39:40.577684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.274 qpair failed and we were unable to recover it. 00:29:08.274 [2024-12-09 10:39:40.577877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.577931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.578168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.578225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.578521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.578606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.578822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.578876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 
00:29:08.275 [2024-12-09 10:39:40.579048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.579101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.579381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.579455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.579693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.579769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.580042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.580096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.580356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.580431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 
00:29:08.275 [2024-12-09 10:39:40.580653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.580709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.580961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.581016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.581272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.581348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.581568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.581640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 00:29:08.275 [2024-12-09 10:39:40.581861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.275 [2024-12-09 10:39:40.581915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.275 qpair failed and we were unable to recover it. 
00:29:08.275 [2024-12-09 10:39:40.582087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.275 [2024-12-09 10:39:40.582165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:08.275 qpair failed and we were unable to recover it.
00:29:08.277 (last 3 messages repeated for every subsequent connection attempt, [2024-12-09 10:39:40.582332] through [2024-12-09 10:39:40.614996], all with errno = 111 for tqpair=0x7f52a0000b90, addr=10.0.0.2, port=4420)
00:29:08.277 [2024-12-09 10:39:40.615269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.615345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.615628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.615701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.615924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.615978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.616210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.616285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.616457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.616511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.616764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.616833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.617049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.617105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.617409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.617482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.617745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.617799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.617990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.618045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.618348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.618422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.618620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.618693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.618964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.619019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.619267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.619343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.619584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.619657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.619907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.619960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.620200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.620279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.620474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.620549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.620761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.620815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.620997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.621053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.621323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.621398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.621650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.621722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.621901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.621955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.622190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.622246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.622469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.622523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.622733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.622787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.623022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.623077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.623298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.623373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.623564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.623617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.623870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.623924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.624105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.624179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.624368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.624422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.624640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.624695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.624917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.624972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.625169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.625225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.625414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.625468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.625677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.625732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.625983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.626037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.626259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.626314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.626501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.626554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.277 [2024-12-09 10:39:40.626777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.626831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.627044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.627098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.627337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.627391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.627558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.627613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 00:29:08.277 [2024-12-09 10:39:40.627836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.277 [2024-12-09 10:39:40.627890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.277 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.628081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.628159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.628383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.628437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.628669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.628742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.628990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.629043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.629295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.629370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.629561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.629632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.629885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.629938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.630120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.630193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.630518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.630597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.630761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.630817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.630983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.631038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.631294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.631367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.631628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.631702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.631920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.631975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.632198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.632276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.632488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.632543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.632736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.632790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.632987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.633041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.633227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.633284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.633462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.633517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.633769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.633823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.634035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.634089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.634381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.634456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.634721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.634777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.634986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.635040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.635308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.635365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.635608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.635682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.635867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.635923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.636117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.636189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 00:29:08.278 [2024-12-09 10:39:40.636469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.636541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.278 [2024-12-09 10:39:40.636794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.278 [2024-12-09 10:39:40.636869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.278 qpair failed and we were unable to recover it. 
00:29:08.280 [last message sequence repeated ~114 more times for tqpair=0x7f52a0000b90 (addr=10.0.0.2, port=4420) with timestamps 2024-12-09 10:39:40.637064 through 10:39:40.669373]
00:29:08.280 [2024-12-09 10:39:40.669633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.669708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.669895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.669951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.670131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.670211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.670459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.670529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.670699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.670756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 
00:29:08.280 [2024-12-09 10:39:40.670922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.670979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.671172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.671227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.671446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.671501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.671690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.671744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.671990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.672044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 
00:29:08.280 [2024-12-09 10:39:40.672266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.672340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.672581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.672658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.672849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.672905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.673087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.673159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.673454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.673526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 
00:29:08.280 [2024-12-09 10:39:40.673778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.673832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.674021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.674076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.674309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.674364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.674583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.674639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.674830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.674885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 
00:29:08.280 [2024-12-09 10:39:40.675097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.675166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.675393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.675447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.675655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.675709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.675923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.675977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.676197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.676253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 
00:29:08.280 [2024-12-09 10:39:40.676546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.676618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.676806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.676861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.677033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.677089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.677322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.677397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 00:29:08.280 [2024-12-09 10:39:40.677617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.677690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.280 qpair failed and we were unable to recover it. 
00:29:08.280 [2024-12-09 10:39:40.677900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.280 [2024-12-09 10:39:40.677954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.678166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.678221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.678459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.678534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.678748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.678802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.679042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.679096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 
00:29:08.281 [2024-12-09 10:39:40.679390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.679465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.679725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.679797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.679991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.680046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.680322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.680380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.680613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.680686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 
00:29:08.281 [2024-12-09 10:39:40.680868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.680925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.681153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.681209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.681444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.681528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.681818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.681892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.682109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.682180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 
00:29:08.281 [2024-12-09 10:39:40.682400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.682454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.682646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.682700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.682920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.682974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.683166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.683221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.683450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.683505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 
00:29:08.281 [2024-12-09 10:39:40.683748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.683819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.684036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.684090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.684349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.684424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.684632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.684708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.684919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.684974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 
00:29:08.281 [2024-12-09 10:39:40.685257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.685332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.685595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.685668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.685899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.685952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.686177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.686232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.686454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.686531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 
00:29:08.281 [2024-12-09 10:39:40.686741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.686795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.686980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.281 [2024-12-09 10:39:40.687036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.281 qpair failed and we were unable to recover it. 00:29:08.281 [2024-12-09 10:39:40.687245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.687317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.687606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.687679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.687900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.687960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 
00:29:08.554 [2024-12-09 10:39:40.688149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.688205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.688421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.688496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.688745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.688799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.688986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.689039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.689248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.689306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 
00:29:08.554 [2024-12-09 10:39:40.689489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.689546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.689804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.689859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.690045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.690100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.690321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.690375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 00:29:08.554 [2024-12-09 10:39:40.690566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.554 [2024-12-09 10:39:40.690621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.554 qpair failed and we were unable to recover it. 
00:29:08.554 [2024-12-09 10:39:40.690795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.554 [2024-12-09 10:39:40.690850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:08.554 qpair failed and we were unable to recover it.
00:29:08.556 [identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" cycles for tqpair=0x7f52a0000b90 (addr=10.0.0.2, port=4420) repeated from 10:39:40.691062 through 10:39:40.723696 — omitted]
00:29:08.556 [2024-12-09 10:39:40.723881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.723935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.724205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.724282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.724473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.724527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.724696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.724750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.724955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.725009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.725230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.725286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.725455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.725510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.725683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.725737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.725927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.725981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.726163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.726217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.726459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.726513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.726724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.726778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.727001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.727064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.727248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.727303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.727485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.727539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.727701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.727757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.727963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.728018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.728201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.728257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.728456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.728511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.728734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.728789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.729052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.729106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.729370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.729442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.729672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.729748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.729954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.730007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.730245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.730319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.730560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.730634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.730891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.730946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.731164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.731220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.731480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.731556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.731768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.731822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.732043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.732097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.732406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.732504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.732729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.732798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.733049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.733104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.733336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.733400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.733655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.733724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.733933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.733999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.734305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.734366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.734584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.734658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.734955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.735029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 
00:29:08.556 [2024-12-09 10:39:40.735216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.556 [2024-12-09 10:39:40.735272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.556 qpair failed and we were unable to recover it. 00:29:08.556 [2024-12-09 10:39:40.735501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.735558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.735844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.735917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.736150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.736205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.736396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.736475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.736731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.736803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.736985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.737039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.737289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.737364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.737612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.737685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.737895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.737968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.738184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.738240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.738462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.738534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.738748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.738829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.739044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.739098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.739415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.739513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.739754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.739827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.740086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.740191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.740435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.740501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.740700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.740766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.741050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.741115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.741347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.741405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.741654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.741720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.741934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.741999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.742252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.742313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.742530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.742630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.742832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.742900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.743167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.743227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.743468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.743534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.743791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.743859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.744162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.744221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.744437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.744520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.744699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.744763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.744992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.745057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.745285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.745343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.745630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.745695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.557 [2024-12-09 10:39:40.745972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.746038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.746299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.746358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.746580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.746646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.746956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.747024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 00:29:08.557 [2024-12-09 10:39:40.747314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.557 [2024-12-09 10:39:40.747374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.557 qpair failed and we were unable to recover it. 
00:29:08.558 [... ~110 further identical connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." records for tqpair=0x7f5294000b90 (addr=10.0.0.2, port=4420), timestamps 10:39:40.747607 through 10:39:40.782218, omitted ...]
00:29:08.559 [2024-12-09 10:39:40.782443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.782510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.782722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.782788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.782998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.783066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.783370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.783451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.783711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.783778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 
00:29:08.559 [2024-12-09 10:39:40.784028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.784093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.784401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.784468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.784710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.784776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.785025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.785091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.785360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.785436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 
00:29:08.559 [2024-12-09 10:39:40.785728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.785794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.786052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.786118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.786387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.786455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.786677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.786744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.787034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.787100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 
00:29:08.559 [2024-12-09 10:39:40.787344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.787409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.787661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.787726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.787942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.788008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.788258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.788326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.788619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.788684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 
00:29:08.559 [2024-12-09 10:39:40.788892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.788959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.789249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.789316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.559 [2024-12-09 10:39:40.789604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.559 [2024-12-09 10:39:40.789670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.559 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.789937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.790003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.790291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.790358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.790608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.790674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.790921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.790986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.791235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.791303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.791555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.791622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.791877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.791942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.792172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.792241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.792505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.792571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.792846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.792911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.793169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.793238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.793503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.793568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.793773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.793840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.794158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.794226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.794451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.794516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.794770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.794834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.795076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.795160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.795367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.795433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.795640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.795705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.795953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.796017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.796288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.796355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.796648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.796712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.797016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.797080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.797312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.797382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.797594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.797658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.797910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.797975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.798197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.798275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.798520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.798584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.798852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.798916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.799201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.799268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.799466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.799532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.799822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.799888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.800168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.800236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.800455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.800521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.800770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.800837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.801132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.801211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.801494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.801559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.801766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.801831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.802113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.802190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.802436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.802503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.802795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.802862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.803078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.803155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.803442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.803506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.803719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.803786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.804063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.804127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.804411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.804478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.804765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.804830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.805044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.805109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.805338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.805403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.805655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.805721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.805974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.806039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.806299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.806365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
00:29:08.560 [2024-12-09 10:39:40.806572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.806639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.806923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.806989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.807187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.807254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.807481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.807550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 00:29:08.560 [2024-12-09 10:39:40.807839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.560 [2024-12-09 10:39:40.807905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.560 qpair failed and we were unable to recover it. 
[... the same three-line record (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 2024-12-09 10:39:40.808161 through 10:39:40.842513 ...]
00:29:08.562 [2024-12-09 10:39:40.842723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.562 [2024-12-09 10:39:40.842787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.562 qpair failed and we were unable to recover it. 00:29:08.562 [2024-12-09 10:39:40.843043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.562 [2024-12-09 10:39:40.843107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.562 qpair failed and we were unable to recover it. 00:29:08.562 [2024-12-09 10:39:40.843410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.562 [2024-12-09 10:39:40.843475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.562 qpair failed and we were unable to recover it. 00:29:08.562 [2024-12-09 10:39:40.843766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.562 [2024-12-09 10:39:40.843830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.562 qpair failed and we were unable to recover it. 00:29:08.562 [2024-12-09 10:39:40.844088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.562 [2024-12-09 10:39:40.844173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.562 qpair failed and we were unable to recover it. 
00:29:08.562 [2024-12-09 10:39:40.844408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.562 [2024-12-09 10:39:40.844473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.562 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.844703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.844771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.844977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.845044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.845352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.845419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.845719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.845784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.846034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.846101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.846357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.846422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.846671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.846740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.846947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.847014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.847255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.847322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.847610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.847675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.847970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.848037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.848303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.848372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.848673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.848739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.848940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.849015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.849242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.849309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.849558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.849624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.849873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.849937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.850226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.850292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.850509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.850574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.850808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.850872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.851167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.851232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.851493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.851558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.851855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.851920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.852207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.852278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.852570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.852636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.852885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.852953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.853198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.853266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.853486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.853555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.853829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.853894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.854105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.854185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.854389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.854456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.854677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.854744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.854992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.855056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.855354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.855419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.855704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.855770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.856052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.856119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.856398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.856462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.856711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.856778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.857021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.857088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.857311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.857377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.857643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.857709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.857921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.857986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.858214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.858281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.858483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.858549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.858750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.858817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.859081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.859162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.859394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.859458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.859709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.859774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.859987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.860054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.860316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.860384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.860579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.860645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.860932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.860996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.861286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.861352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.861608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.861685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.861930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.861996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.862282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.862348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.862633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.862698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.862943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.863008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.563 [2024-12-09 10:39:40.863298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.863366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 
00:29:08.563 [2024-12-09 10:39:40.863585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.563 [2024-12-09 10:39:40.863651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.563 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.863906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.863971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.864220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.864288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.864504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.864570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.864855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.864922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 
00:29:08.564 [2024-12-09 10:39:40.865183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.865248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.865539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.865604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.865813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.865878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.866117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.866218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.866463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.866528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 
00:29:08.564 [2024-12-09 10:39:40.866719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.866785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.867026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.867091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.867358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.867425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.867706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.867773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 00:29:08.564 [2024-12-09 10:39:40.868072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.564 [2024-12-09 10:39:40.868137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.564 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.903156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.903222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.903425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.903490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.903768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.903833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.904137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.904216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.904469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.904534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.904733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.904798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.905067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.905131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.905395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.905460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.905699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.905764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.906046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.906110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.906392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.906460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.906659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.906724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.906993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.907058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.907329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.907396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.907598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.907667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.907977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.908043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.908323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.908390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.908647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.908711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.908962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.909030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.909293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.909361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.909609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.909675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.909882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.909947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.910210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.910276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.910561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.910626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.910829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.910894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.911151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.911218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.911469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.911534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.911798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.911864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.912093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.912171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.912413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.912489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.912751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.912816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.913013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.913081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.913388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.913455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.913747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.913812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.914088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.914182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.914438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.914505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.914757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.914825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.915112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.915194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.915477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.915542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.915751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.915819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.916076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.916159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.916403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.916467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.916762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.916828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.917042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.917112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.917427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.917492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.917737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.917802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.918087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.918168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.918459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.918525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.918772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.918837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 00:29:08.566 [2024-12-09 10:39:40.919045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.566 [2024-12-09 10:39:40.919110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.566 qpair failed and we were unable to recover it. 
00:29:08.566 [2024-12-09 10:39:40.919357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.919421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.919624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.919688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.919935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.920004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.920328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.920395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.920677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.920743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 
00:29:08.567 [2024-12-09 10:39:40.920959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.921025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.921216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.921292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.921555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.921623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.921872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.921937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.922185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.922254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 
00:29:08.567 [2024-12-09 10:39:40.922467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.922532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.922779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.922843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.923132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.923214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.923459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.923524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.923807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.923871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 
00:29:08.567 [2024-12-09 10:39:40.924072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.924168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.924462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.924527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.924731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.924798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.925083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.925161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.925409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.925473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 
00:29:08.567 [2024-12-09 10:39:40.925769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.925835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.926051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.926115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.926437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.926502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.926745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.926812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 00:29:08.567 [2024-12-09 10:39:40.927049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.567 [2024-12-09 10:39:40.927117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.567 qpair failed and we were unable to recover it. 
00:29:08.567 [2024-12-09 10:39:40.927422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.567 [2024-12-09 10:39:40.927487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.567 qpair failed and we were unable to recover it.
00:29:08.568 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f5294000b90 repeats through 10:39:40.947 ...]
00:29:08.568 [... further identical failures for tqpair=0x7f5294000b90 through 10:39:40.949 ...]
00:29:08.568 [2024-12-09 10:39:40.949201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2cf30 (9): Bad file descriptor
00:29:08.568 [2024-12-09 10:39:40.949556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.568 [2024-12-09 10:39:40.949634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.568 qpair failed and we were unable to recover it.
00:29:08.568 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x1f1efa0 repeats through 10:39:40.954 ...]
00:29:08.568 [... two more identical failed attempts for tqpair=0x1f1efa0 ...]
00:29:08.568 [2024-12-09 10:39:40.955526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.568 [2024-12-09 10:39:40.955624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.568 qpair failed and we were unable to recover it.
00:29:08.569 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x7f5294000b90 repeats through 10:39:40.964 ...]
00:29:08.569 [2024-12-09 10:39:40.964786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.964851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.965069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.965135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.965430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.965496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.965797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.965861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.966110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.966191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.966452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.966521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.966740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.966804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.967096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.967173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.967428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.967493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.967776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.967840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.968103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.968182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.968469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.968534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.968774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.968841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.969109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.969192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.969462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.969528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.969737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.969805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.970065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.970130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.970401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.970467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.970720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.970784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.971036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.971102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.971415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.971480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.971717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.971782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.972068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.972132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.972401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.972465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.972672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.972740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.972997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.973062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.973376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.973443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.973705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.973770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.974065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.974130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.974372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.974436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.974707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.974772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.974971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.975036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.975298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.975364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.975654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.975718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.975969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.976033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 
00:29:08.569 [2024-12-09 10:39:40.976335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.569 [2024-12-09 10:39:40.976402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.569 qpair failed and we were unable to recover it. 00:29:08.569 [2024-12-09 10:39:40.976653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.976717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.976964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.977028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.977279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.977353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.977612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.977689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 
00:29:08.570 [2024-12-09 10:39:40.977910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.977978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.978199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.978264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.978503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.978570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.978867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.978933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.979173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.979242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 
00:29:08.570 [2024-12-09 10:39:40.979495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.979560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.979801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.979867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.980169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.980235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.980459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.980528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.570 [2024-12-09 10:39:40.980786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.980851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 
00:29:08.570 [2024-12-09 10:39:40.981055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.570 [2024-12-09 10:39:40.981122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.570 qpair failed and we were unable to recover it. 00:29:08.841 [2024-12-09 10:39:40.981450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.841 [2024-12-09 10:39:40.981516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.841 qpair failed and we were unable to recover it. 00:29:08.841 [2024-12-09 10:39:40.981730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.981799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.982101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.982185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.982454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.982519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-12-09 10:39:40.982773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.982846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.983136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.983217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.983477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.983546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.983794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.983868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.984116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.984204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-12-09 10:39:40.984446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.984513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.984728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.984799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.985007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.985074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.985329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.985401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.985649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.985714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-12-09 10:39:40.985994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.986058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.986516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.986585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.986851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.986917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.987205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.987272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.987515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.987579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-12-09 10:39:40.987807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.987871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.988105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.988184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.988436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.988501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.988695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.988761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.988985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.989050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-12-09 10:39:40.989365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.989432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.989632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.989698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.989915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.989980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.990202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.990269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.990517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.990596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 
00:29:08.842 [2024-12-09 10:39:40.990870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.990935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.991158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.991249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.991508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.991574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.842 [2024-12-09 10:39:40.991815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.842 [2024-12-09 10:39:40.991881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.842 qpair failed and we were unable to recover it. 00:29:08.843 [2024-12-09 10:39:40.992156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.843 [2024-12-09 10:39:40.992223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.843 qpair failed and we were unable to recover it. 
00:29:08.843 [2024-12-09 10:39:40.992445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.992512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.992763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.992828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.993094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.993173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.993420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.993487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.993739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.993804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.994026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.994094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.994355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.994421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.994669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.994734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.994959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.995025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.995248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.995314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.995553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.995619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.995898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.995963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.996220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.996286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.996498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.996563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.996770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.996837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.997099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.997195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.997451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.997516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.997762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.997829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.998075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.998156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.998452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.998516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.998804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.998868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.999154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.999221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.999511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.999575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:40.999825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:40.999889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:41.000133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:41.000213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:41.000459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:41.000525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:41.000782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:41.000846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:41.001094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:41.001173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:41.001395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:41.001463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.843 [2024-12-09 10:39:41.001769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.843 [2024-12-09 10:39:41.001833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.843 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.002051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.002117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.002418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.002482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.002674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.002740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.002985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.003051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.003270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.003349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.003593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.003660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.003887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.003953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.004188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.004254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.004453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.004518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.004725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.004791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.005038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.005105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.005444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.005510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.005808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.005872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.006121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.006201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.006411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.006478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.006761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.006826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.007063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.007129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.007386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.007451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.007720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.007786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.008036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.008101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.008317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.008386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.008647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.008712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.008956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.009021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.009319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.009385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.009675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.009740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.009941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.010006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.010210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.010281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.010529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.010597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.844 qpair failed and we were unable to recover it.
00:29:08.844 [2024-12-09 10:39:41.010887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.844 [2024-12-09 10:39:41.010952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.011169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.011237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.011524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.011589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.011892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.011957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.012248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.012314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.012578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.012644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.012842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.012908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.013202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.013269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.013532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.013597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.013838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.013904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.014194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.014262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.014517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.014581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.014843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.014909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.015206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.015272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.015526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.015591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.015885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.015950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.016209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.016275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.016503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.016570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.016874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.016940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.017206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.017273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.017559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.017623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.017913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.017978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.018239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.018306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.018548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.018614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.018844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.018909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.019196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.019263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.019515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.019582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.019797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.019864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.020065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.020133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.020439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.020504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.845 qpair failed and we were unable to recover it.
00:29:08.845 [2024-12-09 10:39:41.020806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.845 [2024-12-09 10:39:41.020872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.021183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.846 [2024-12-09 10:39:41.021251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.021498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.846 [2024-12-09 10:39:41.021565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.021812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.846 [2024-12-09 10:39:41.021878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.022084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.846 [2024-12-09 10:39:41.022162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.022421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.846 [2024-12-09 10:39:41.022486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.022737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.846 [2024-12-09 10:39:41.022802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:08.846 qpair failed and we were unable to recover it.
00:29:08.846 [2024-12-09 10:39:41.023099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.023179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.023431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.023497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.023787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.023852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.024128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.024213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.024476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.024541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-12-09 10:39:41.024726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.024790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.025070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.025170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.025429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.025497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.025709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.025775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.026062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.026127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-12-09 10:39:41.026342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.026409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.026690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.026756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.027061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.027126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.027397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.027462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.027722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.027787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-12-09 10:39:41.028071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.028151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.028445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.028509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.028754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.028819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.029107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.029186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.029399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.029463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 
00:29:08.846 [2024-12-09 10:39:41.029732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.846 [2024-12-09 10:39:41.029798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.846 qpair failed and we were unable to recover it. 00:29:08.846 [2024-12-09 10:39:41.030079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.030155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.030413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.030477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.030767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.030833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.031050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.031114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 
00:29:08.847 [2024-12-09 10:39:41.031382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.031447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.031730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.031795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.032085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.032162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.032452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.032516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.032758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.032825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 
00:29:08.847 [2024-12-09 10:39:41.033118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.033216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.033463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.033529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.033819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.033885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.034169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.034237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.034474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.034540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 
00:29:08.847 [2024-12-09 10:39:41.034821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.034886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.035164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.035230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.035468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.035533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.035775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.035842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.036094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.036171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 
00:29:08.847 [2024-12-09 10:39:41.036414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.036480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.036726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.036791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.037035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.037100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.037377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.037442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.847 qpair failed and we were unable to recover it. 00:29:08.847 [2024-12-09 10:39:41.037671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.847 [2024-12-09 10:39:41.037736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 
00:29:08.848 [2024-12-09 10:39:41.038028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.038092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.038394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.038470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.038757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.038823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.039068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.039132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.039403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.039471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 
00:29:08.848 [2024-12-09 10:39:41.039715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.039781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.040067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.040132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.040370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.040437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.040726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.040790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.041048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.041113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 
00:29:08.848 [2024-12-09 10:39:41.041368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.041437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.041722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.041787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.042079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.042159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.042370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.042438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.042682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.042746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 
00:29:08.848 [2024-12-09 10:39:41.042977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.043042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.043276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.043343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.043624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.043688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.043904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.043971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.044263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.044329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 
00:29:08.848 [2024-12-09 10:39:41.044575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.044642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.044935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.045001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.045301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.045367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.045628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.045695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.045984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.046049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 
00:29:08.848 [2024-12-09 10:39:41.046279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.046346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.046592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.046660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.046907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.046974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.848 qpair failed and we were unable to recover it. 00:29:08.848 [2024-12-09 10:39:41.047257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.848 [2024-12-09 10:39:41.047326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.047583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.047649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 
00:29:08.849 [2024-12-09 10:39:41.047898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.047963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.048265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.048330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.048567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.048632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.048921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.048985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.049264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.049330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 
00:29:08.849 [2024-12-09 10:39:41.049612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.049676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.049892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.049958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.050185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.050251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.050460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.050524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 00:29:08.849 [2024-12-09 10:39:41.050768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.050832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 
00:29:08.849 [2024-12-09 10:39:41.051098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.849 [2024-12-09 10:39:41.051182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.849 qpair failed and we were unable to recover it. 
00:29:08.853 [identical connect() failure (errno = 111, ECONNREFUSED) for tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 repeated from 10:39:41.051 through 10:39:41.088; duplicate log entries elided] 
00:29:08.853 [2024-12-09 10:39:41.088206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.088272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.088537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.088602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.088854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.088918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.089126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.089223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.089437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.089504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 
00:29:08.853 [2024-12-09 10:39:41.089731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.089797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.090038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.090113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.090386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.090451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.090700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.090765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.091065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.091130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 
00:29:08.853 [2024-12-09 10:39:41.091404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.091469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.091684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.091750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.853 qpair failed and we were unable to recover it. 00:29:08.853 [2024-12-09 10:39:41.092049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.853 [2024-12-09 10:39:41.092115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.092375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.092442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.092695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.092760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 
00:29:08.854 [2024-12-09 10:39:41.092991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.093056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.093359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.093426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.093702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.093766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.094003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.094069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.094288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.094355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 
00:29:08.854 [2024-12-09 10:39:41.094611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.094679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.094971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.095037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.095317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.095385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.095634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.095698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.095979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.096044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 
00:29:08.854 [2024-12-09 10:39:41.096249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.096316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.096605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.096670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.096930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.096999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.097277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.097343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.097590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.097655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 
00:29:08.854 [2024-12-09 10:39:41.097909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.097974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.098206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.098272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.098516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.098582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.098881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.098947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.099214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.099279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 
00:29:08.854 [2024-12-09 10:39:41.099487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.099552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.099795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.099861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.100118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.100197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.100449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.100512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.100764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.100831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 
00:29:08.854 [2024-12-09 10:39:41.101116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.101196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.854 qpair failed and we were unable to recover it. 00:29:08.854 [2024-12-09 10:39:41.101473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.854 [2024-12-09 10:39:41.101537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.101786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.101850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.102106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.102185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.102489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.102557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 
00:29:08.855 [2024-12-09 10:39:41.102852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.102916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.103166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.103244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.103527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.103593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.103794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.103858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.104113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.104210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 
00:29:08.855 [2024-12-09 10:39:41.104468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.104533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.104819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.104884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.105132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.105212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.105468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.105537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.105821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.105885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 
00:29:08.855 [2024-12-09 10:39:41.106156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.106223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.106486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.106553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.106816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.106880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.107179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.107246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.107491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.107556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 
00:29:08.855 [2024-12-09 10:39:41.107856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.107921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.108176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.108242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.108495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.108560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.108810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.108875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.109151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.109217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 
00:29:08.855 [2024-12-09 10:39:41.109485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.109550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.109783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.109848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.110101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.110179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.110441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.110508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.110787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.110853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 
00:29:08.855 [2024-12-09 10:39:41.111097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.855 [2024-12-09 10:39:41.111175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.855 qpair failed and we were unable to recover it. 00:29:08.855 [2024-12-09 10:39:41.111425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.856 [2024-12-09 10:39:41.111490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.856 qpair failed and we were unable to recover it. 00:29:08.856 [2024-12-09 10:39:41.111773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.856 [2024-12-09 10:39:41.111839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.856 qpair failed and we were unable to recover it. 00:29:08.856 [2024-12-09 10:39:41.112106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.856 [2024-12-09 10:39:41.112203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.856 qpair failed and we were unable to recover it. 00:29:08.856 [2024-12-09 10:39:41.112492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.856 [2024-12-09 10:39:41.112556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.856 qpair failed and we were unable to recover it. 
00:29:08.856 [2024-12-09 10:39:41.112768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.856 [2024-12-09 10:39:41.112834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.856 qpair failed and we were unable to recover it.
[identical connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error for tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 repeated continuously from 10:39:41.113070 through 10:39:41.148890; duplicate repetitions omitted]
00:29:08.860 [2024-12-09 10:39:41.149105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.149184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.149428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.149493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.149760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.149826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.150049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.150115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.150355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.150420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 
00:29:08.860 [2024-12-09 10:39:41.150681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.150745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.150957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.151021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.151255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.151325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.151583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.151648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.151948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.152013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 
00:29:08.860 [2024-12-09 10:39:41.152267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.152332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.152630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.152694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.152938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.153003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.153212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.153280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.153571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.153636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 
00:29:08.860 [2024-12-09 10:39:41.153925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.153990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.154246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.154324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.154616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.154682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.154898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.154963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.155217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.155283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 
00:29:08.860 [2024-12-09 10:39:41.155578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.155642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.155902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.155966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.860 [2024-12-09 10:39:41.156222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.860 [2024-12-09 10:39:41.156288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.860 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.156575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.156641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.156891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.156955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 
00:29:08.861 [2024-12-09 10:39:41.157192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.157258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.157479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.157544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.157800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.157864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.158121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.158199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.158456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.158523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 
00:29:08.861 [2024-12-09 10:39:41.158790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.158855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.159112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.159193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.159456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.159522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.159781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.159845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.160063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.160128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 
00:29:08.861 [2024-12-09 10:39:41.160410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.160475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.160701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.160765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.161065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.161130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.161400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.161465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.161688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.161752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 
00:29:08.861 [2024-12-09 10:39:41.161956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.162023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.162285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.162352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.162634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.162699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.163003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.163069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.163318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.163385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 
00:29:08.861 [2024-12-09 10:39:41.163586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.163653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.163920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.163986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.164191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.164258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.164465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.164530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.164725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.164791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 
00:29:08.861 [2024-12-09 10:39:41.165076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.165155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.165379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.165444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.861 [2024-12-09 10:39:41.165697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.861 [2024-12-09 10:39:41.165764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.861 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.165995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.166060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.166362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.166428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 
00:29:08.862 [2024-12-09 10:39:41.166637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.166702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.166904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.166982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.167234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.167300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.167525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.167589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.167818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.167882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 
00:29:08.862 [2024-12-09 10:39:41.168120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.168201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.168455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.168520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.168782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.168847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.169055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.169119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.169350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.169414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 
00:29:08.862 [2024-12-09 10:39:41.169655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.169721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.169976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.170044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.170315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.170383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.170623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.170688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.170907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.170972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 
00:29:08.862 [2024-12-09 10:39:41.171253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.171321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.171572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.171640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.171929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.171994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.172198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.172264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 00:29:08.862 [2024-12-09 10:39:41.172558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.172625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 
00:29:08.862 [2024-12-09 10:39:41.172824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.862 [2024-12-09 10:39:41.172891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.862 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error pair for tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 repeats with advancing timestamps from 10:39:41.173099 through 10:39:41.199120; each repetition ends "qpair failed and we were unable to recover it." ...]
00:29:08.866 [2024-12-09 10:39:41.199231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.866 [2024-12-09 10:39:41.199271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.866 qpair failed and we were unable to recover it. 
[... the same failure pair for tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 repeats with advancing timestamps from 10:39:41.199392 through 10:39:41.204365; each repetition ends "qpair failed and we were unable to recover it." ...]
00:29:08.866 [2024-12-09 10:39:41.204480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.866 [2024-12-09 10:39:41.204535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.866 qpair failed and we were unable to recover it. 00:29:08.866 [2024-12-09 10:39:41.204666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.866 [2024-12-09 10:39:41.204703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.866 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.204971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.205035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.205266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.205293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.205386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.205411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 
00:29:08.867 [2024-12-09 10:39:41.205528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.205565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.205699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.205724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.205940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.206002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.206212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.206238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.206325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.206349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 
00:29:08.867 [2024-12-09 10:39:41.206483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.206520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.206704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.206765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 
00:29:08.867 [2024-12-09 10:39:41.207383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.207817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.207925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 
00:29:08.867 [2024-12-09 10:39:41.208158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.208223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.208343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.208369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.208574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.208638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.208870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.208930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.209181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.209209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 
00:29:08.867 [2024-12-09 10:39:41.209298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.209325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.209420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.209500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.209722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.209796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.210046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.210072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.867 [2024-12-09 10:39:41.210165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.210191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 
00:29:08.867 [2024-12-09 10:39:41.210301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.867 [2024-12-09 10:39:41.210326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.867 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.210411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.210436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.210583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.210618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.210852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.210907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.211174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.211233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 
00:29:08.868 [2024-12-09 10:39:41.211328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.211355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.211448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.211476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.211563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.211590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.211681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.211708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.211801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.211885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 
00:29:08.868 [2024-12-09 10:39:41.212088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.212169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.212297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.212323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.212436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.212462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.212541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.212620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.212860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 
00:29:08.868 [2024-12-09 10:39:41.213185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.213211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.213328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.213353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.213441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.213466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.213552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.213578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.213677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.213718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 
00:29:08.868 [2024-12-09 10:39:41.213866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.213936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.214171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.214225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.214360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.214386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.214474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.214500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.214582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.214616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 
00:29:08.868 [2024-12-09 10:39:41.214716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.214744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.214972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.215036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.215240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.215266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.215356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.868 [2024-12-09 10:39:41.215382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.868 qpair failed and we were unable to recover it. 00:29:08.868 [2024-12-09 10:39:41.215502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.215578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 
00:29:08.869 [2024-12-09 10:39:41.215819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.215884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 
00:29:08.869 [2024-12-09 10:39:41.216624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.216883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.216976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.217091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 
00:29:08.869 [2024-12-09 10:39:41.217210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.217356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.217468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.217576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.217738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.217803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 
00:29:08.869 [2024-12-09 10:39:41.217999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.218115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.218225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.218340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.218455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 
00:29:08.869 [2024-12-09 10:39:41.218639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.218893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.218963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.219192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.219221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.219307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.219357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.219551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.219609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 
00:29:08.869 [2024-12-09 10:39:41.219848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.219907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.220185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.220223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.220345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.869 [2024-12-09 10:39:41.220380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.869 qpair failed and we were unable to recover it. 00:29:08.869 [2024-12-09 10:39:41.220518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.220553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.220703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.220739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 
00:29:08.870 [2024-12-09 10:39:41.220859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.220894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.221114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.221164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.221290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.221330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.221485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.221546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.221743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.221803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 
00:29:08.870 [2024-12-09 10:39:41.222051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.222089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.222249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.222287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.222473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.222533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.222716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.222776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.223005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.223065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 
00:29:08.870 [2024-12-09 10:39:41.223286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.223324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.223471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.223535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.223831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.223894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.224232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.224270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.224399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.224455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 
00:29:08.870 [2024-12-09 10:39:41.224666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.224728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.224968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.225022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.225217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.225255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.225416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.225482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.225674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.225736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 
00:29:08.870 [2024-12-09 10:39:41.225954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.226014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.226275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.226313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.226430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.226508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.226760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.226799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.226953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.226990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 
00:29:08.870 [2024-12-09 10:39:41.227118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.227169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.227422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.227460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.870 qpair failed and we were unable to recover it. 00:29:08.870 [2024-12-09 10:39:41.227624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.870 [2024-12-09 10:39:41.227680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.227862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.227925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.228196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.228258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-12-09 10:39:41.228443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.228501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.228665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.228709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.228919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.228980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.229199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.229261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.229472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.229532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-12-09 10:39:41.229797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.229856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.230094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.230166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.230364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.230424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.230591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.230651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.230893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.230929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-12-09 10:39:41.231081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.231121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.231320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.231382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.231610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.231671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.231885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.231923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.232047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.232085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-12-09 10:39:41.232230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.232271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.232429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.232466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.232692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.232730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.232845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.232881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.232986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.233023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 
00:29:08.871 [2024-12-09 10:39:41.233174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.233212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.233322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.233360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.233538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.871 [2024-12-09 10:39:41.233599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.871 qpair failed and we were unable to recover it. 00:29:08.871 [2024-12-09 10:39:41.233827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.233889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.234161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.234223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-12-09 10:39:41.234486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.234546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.234802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.234839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.234963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.235001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.235173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.235238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.235479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.235516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-12-09 10:39:41.235664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.235702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.235856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.235920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.236160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.236224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.236463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.236524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.236703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.236763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-12-09 10:39:41.237015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.237053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.237182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.237220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.237471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.237510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.237628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.237666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.237847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.237912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-12-09 10:39:41.238125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.238188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.238346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.238390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.238629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.238693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.238835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.238872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.239069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.239106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-12-09 10:39:41.239252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.239290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.239520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.239581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.239822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.239882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.240085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.240198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.240457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.240495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 
00:29:08.872 [2024-12-09 10:39:41.240626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.872 [2024-12-09 10:39:41.240663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.872 qpair failed and we were unable to recover it. 00:29:08.872 [2024-12-09 10:39:41.240916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.240953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.241078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.241115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.241323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.241360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.241465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.241502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-12-09 10:39:41.241632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.241670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.241875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.241912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.242049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.242086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.242247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.242319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.242543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.242603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-12-09 10:39:41.242882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.242945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.243164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.243226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.243375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.243412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.243542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.243581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.243708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.243746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-12-09 10:39:41.243875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.243912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.244075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.244111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.244294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.244346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.244507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.244567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.244753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.244816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-12-09 10:39:41.245040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.245099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.245300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.245335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.245536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.245595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.245837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.245871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.245993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.246025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-12-09 10:39:41.246252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.246296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.246459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.246538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.246802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.246874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.247161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.247224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.873 [2024-12-09 10:39:41.247375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.247418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 
00:29:08.873 [2024-12-09 10:39:41.247707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.873 [2024-12-09 10:39:41.247740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.873 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.247858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.247891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.248064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.248124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.248309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.248350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.248548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.248606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-12-09 10:39:41.248820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.248877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.249110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.249181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.249401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.249435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.249539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.249569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.249777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.249834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-12-09 10:39:41.250067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.250123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.250356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.250389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.250520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.250552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.250731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.250790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.251054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.251113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-12-09 10:39:41.251298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.251348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.251533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.251566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.251692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.251724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.251864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.251917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.252038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.252072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-12-09 10:39:41.252269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.252313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.252482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.252539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.252717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.252775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.253025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.253059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.253172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.253205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-12-09 10:39:41.253326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.253358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.253640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.253683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.253820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.253891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.254111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.254197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 00:29:08.874 [2024-12-09 10:39:41.254353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.874 [2024-12-09 10:39:41.254395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.874 qpair failed and we were unable to recover it. 
00:29:08.874 [2024-12-09 10:39:41.254674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.254733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.254927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.254986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.255224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.255267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.255394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.255457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.255738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.255771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 
00:29:08.875 [2024-12-09 10:39:41.255907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.255940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.256127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.256170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.256293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.256326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.256468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.256524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.256789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.256847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 
00:29:08.875 [2024-12-09 10:39:41.257118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.257204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.257351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.257393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.257629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.257704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.257874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.257930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.258052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.258085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 
00:29:08.875 [2024-12-09 10:39:41.258288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.258331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.258485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.258547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.258782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.258816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.258916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.258948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.259071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.259104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 
00:29:08.875 [2024-12-09 10:39:41.259317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.259360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.259558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.259616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.259787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.259845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.260091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.260124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.260271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.260304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 
00:29:08.875 [2024-12-09 10:39:41.260430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.260463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.260658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.260700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.260885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.260918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.261030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.261062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.875 qpair failed and we were unable to recover it. 00:29:08.875 [2024-12-09 10:39:41.261194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.875 [2024-12-09 10:39:41.261237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.876 qpair failed and we were unable to recover it. 
00:29:08.876 [2024-12-09 10:39:41.261363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.876 [2024-12-09 10:39:41.261405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.876 qpair failed and we were unable to recover it. 00:29:08.876 [2024-12-09 10:39:41.261624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.876 [2024-12-09 10:39:41.261683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.876 qpair failed and we were unable to recover it. 00:29:08.876 [2024-12-09 10:39:41.261933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.876 [2024-12-09 10:39:41.261996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.876 qpair failed and we were unable to recover it. 00:29:08.876 [2024-12-09 10:39:41.262260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.876 [2024-12-09 10:39:41.262304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.876 qpair failed and we were unable to recover it. 00:29:08.876 [2024-12-09 10:39:41.262480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:08.876 [2024-12-09 10:39:41.262523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:08.876 qpair failed and we were unable to recover it. 
00:29:08.876 [2024-12-09 10:39:41.262725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.262759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.262887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.262920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.263095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.263164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.263338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.263391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.263528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.263566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.263822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.263881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.264178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.264223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.264404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.264471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.264657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.264716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.264950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.265009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.265262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.265324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.265538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.265604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.265812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.265857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.266023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.266083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.266284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.266329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.266477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.266520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.266668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.266712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.266936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.267262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.267416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.267534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.267650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.267796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.267960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.876 [2024-12-09 10:39:41.267985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.876 qpair failed and we were unable to recover it.
00:29:08.876 [2024-12-09 10:39:41.268076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.877 [2024-12-09 10:39:41.268103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.877 qpair failed and we were unable to recover it.
00:29:08.877 [2024-12-09 10:39:41.268196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.877 [2024-12-09 10:39:41.268223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.877 qpair failed and we were unable to recover it.
00:29:08.877 [2024-12-09 10:39:41.268306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.877 [2024-12-09 10:39:41.268332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.877 qpair failed and we were unable to recover it.
00:29:08.877 [2024-12-09 10:39:41.268416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.877 [2024-12-09 10:39:41.268442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.877 qpair failed and we were unable to recover it.
00:29:08.877 [2024-12-09 10:39:41.268540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.877 [2024-12-09 10:39:41.268577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.877 qpair failed and we were unable to recover it.
00:29:08.877 [2024-12-09 10:39:41.268700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:08.877 [2024-12-09 10:39:41.268796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:08.877 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.268955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.268998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.269177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.269222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.269318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.269345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.269523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.269549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.269635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.269660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.269754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.269809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.269993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.270216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.270333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.270470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.270618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.270761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.270919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.271049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.271081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.271220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.271246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.271373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.271415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.271632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.271700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.271892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.271954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.272172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.272199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.272287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.272314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.272408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.272451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.272667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.272739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.272941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.272975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.273086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.273122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.273272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.273299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.273389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.273415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.273519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.273557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.273717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.273783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.273970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.274038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.274236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.274263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.274374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.274400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.274635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.274672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.274829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.274892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.275117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.275214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.275310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.275336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.275446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.275502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.275692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.275748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.152 [2024-12-09 10:39:41.275956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.152 [2024-12-09 10:39:41.276012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.152 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.276210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.276237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.276356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.276382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.276490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.276535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.276711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.276737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.276822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.276848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.276946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.277003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.277221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.277247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.277364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.277393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.277545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.277600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.277784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.277839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.278099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.278124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.278207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.278233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.278322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.278347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.278479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.278522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.278723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.278766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.278925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.278969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.279160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.279214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.279337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.279368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.279484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.279510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.279622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.279649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.279789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.279845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.280011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.280084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.280235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.280262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.280357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.280383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.280466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.280517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.280737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.280763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.280940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.280995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.281132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.281191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.281282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.281308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.281413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.281439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.281533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.281604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.281863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.281934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.282111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.282204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.282295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.282320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.282467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.282530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.282784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.282837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.283041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.283095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.283243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.153 [2024-12-09 10:39:41.283271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.153 qpair failed and we were unable to recover it.
00:29:09.153 [2024-12-09 10:39:41.283360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.283386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.283518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.283566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.283680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.283726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.283897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.283960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.284132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.284201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.284294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.284320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.284425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.284518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.284709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.284766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.284956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.154 [2024-12-09 10:39:41.285008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.154 qpair failed and we were unable to recover it.
00:29:09.154 [2024-12-09 10:39:41.285221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.285247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.285344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.285370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.285449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.285475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.285667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.285712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.285910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.285962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-12-09 10:39:41.286176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.286245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.286346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.286374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.286550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.286601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.286793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.286836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.287005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.287052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-12-09 10:39:41.287233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.287260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.287354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.287380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.287524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.287576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.287788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.287832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.288012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-12-09 10:39:41.288282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.288390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.288513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.288628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.288739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-12-09 10:39:41.288854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.288880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.288973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.289033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.289259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.289305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.289453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.289496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.289636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.289681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 
00:29:09.154 [2024-12-09 10:39:41.289917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.289969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.290197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.290267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.290440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.290489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.290673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.154 [2024-12-09 10:39:41.290729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.154 qpair failed and we were unable to recover it. 00:29:09.154 [2024-12-09 10:39:41.290965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.291018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.291235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.291282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.291443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.291492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.291640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.291689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.291880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.291945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.292128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.292207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.292388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.292453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.292659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.292710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.292907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.292972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.293234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.293282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.293499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.293551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.293718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.293770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.293915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.293967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.294119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.294200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.294359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.294405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.294675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.294722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.294928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.294987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.295213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.295261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.295418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.295486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.295722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.295787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.296036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.296095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.296343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.296390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.296589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.296642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.296870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.296928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.297200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.297249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.297401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.297447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.297577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.297623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.297805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.297875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.298083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.298134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.298361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.298410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.298645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.298698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.298940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.299006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.299225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.299272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.299447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.299502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.299701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.299754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.299938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.299990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 
00:29:09.155 [2024-12-09 10:39:41.300151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.300217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.300394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.155 [2024-12-09 10:39:41.300459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.155 qpair failed and we were unable to recover it. 00:29:09.155 [2024-12-09 10:39:41.300674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.300727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.300942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.300994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.301202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.301251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.301405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.301452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.301595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.301641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.301923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.301975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.302201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.302248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.302461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.302513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.302751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.302805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.302987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.303038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.303248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.303300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.303443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.303486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.303688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.303750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.303906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.303978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.304196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.304244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.304451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.304504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.304762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.304805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.304992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.305046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.305241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.305288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.305474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.305530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.305777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.305834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.306072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.306128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.306454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.306500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.306766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.306818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.307079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.307125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.307380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.307437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.307632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.307686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.307896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.307938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.308147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.308211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.308387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.308433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.308616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.308682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.308872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.308924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 00:29:09.156 [2024-12-09 10:39:41.309181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.156 [2024-12-09 10:39:41.309239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.156 qpair failed and we were unable to recover it. 
00:29:09.156 [2024-12-09 10:39:41.309458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.309517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.309759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.309821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.310001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.310067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.310281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.310333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.310548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.310604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.310857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.310913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.311129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.311190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.311400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.311452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.311657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.311709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.311923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.311966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.312167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.312211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.312400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.312447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.312651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.312705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.312916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.312969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.313188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.313241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.313415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.313470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.313676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.313729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.313979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.314030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.314268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.314315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.314466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.314512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.314720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.314773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.314936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.314989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.315176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.315230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.315427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.315480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.315649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.315702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.315915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.315968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.316131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.316192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.316361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.316413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.316584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.316638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.316831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.316883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.317087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.317161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.317383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.317436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.317651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.317703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.317896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.317948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.318185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.318238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 
00:29:09.157 [2024-12-09 10:39:41.318382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.318427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.318571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.318632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.318785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.157 [2024-12-09 10:39:41.318837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.157 qpair failed and we were unable to recover it. 00:29:09.157 [2024-12-09 10:39:41.319006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.319084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.319330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.319378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.319538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.319619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.319933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.320005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.320252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.320300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.320501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.320562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.320739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.320828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.321052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.321106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.321341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.321388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.321570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.321635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.321823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.321875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.322044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.322115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.322387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.322433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.322625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.322692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.322905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.322962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.323217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.323274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.323439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.323496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.323681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.323737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.323988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.324045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.324250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.324319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.324535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.324592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.324822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.324868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.325027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.325073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.325280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.325337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.325588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.325644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.325875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.325922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.326128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.326220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.326404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.326462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.326691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.326738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.326904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.326981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.327223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.327281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 00:29:09.158 [2024-12-09 10:39:41.327535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.327581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it. 
00:29:09.158 [2024-12-09 10:39:41.327770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.158 [2024-12-09 10:39:41.327826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.158 qpair failed and we were unable to recover it.
[log truncated: the connect() failed / qpair failed pair above repeats roughly 60 more times for tqpair=0x7f5294000b90 between 10:39:41.327 and 10:39:41.345, identical except for microsecond timestamps]
00:29:09.160 [2024-12-09 10:39:41.345420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.160 [2024-12-09 10:39:41.345507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.160 qpair failed and we were unable to recover it.
[log truncated: the same pair repeats roughly 50 more times for tqpair=0x1f1efa0 between 10:39:41.345 and 10:39:41.359, identical except for microsecond timestamps]
00:29:09.161 [2024-12-09 10:39:41.359284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.359348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.359534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.359579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.359763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.359827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.360070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.360121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.360340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.360391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.360557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.360607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.360847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.360898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.361112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.361210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.361467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.361531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.361816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.361880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.362066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.362116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.362352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.362416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.362666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.362729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.362982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.363031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.363241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.363294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.363459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.363539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.363778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.363841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.364040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.364090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.364313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.364358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.364515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.364559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.364734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.364819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.365038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.365083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.365271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.365343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.365595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.365640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.365773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.365817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.366004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.366053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.366239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.366302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.366554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.366617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.366884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.366947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.367157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.367236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.367420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.367471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.367673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.367736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.367935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.367989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.368187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.368255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.368517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.368582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 
00:29:09.162 [2024-12-09 10:39:41.368766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.368817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.369013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.369063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.162 [2024-12-09 10:39:41.369369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.162 [2024-12-09 10:39:41.369421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.162 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.369654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.369704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.369932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.369983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.370222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.370275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.370446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.370498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.370712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.370767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.370947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.371022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.371223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.371279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.371459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.371514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.371721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.371776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.371942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.372006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.372208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.372263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.372442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.372496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.372708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.372762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.373012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.373066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.373341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.373397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.373569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.373626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.373854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.373909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.374120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.374194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.374439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.374495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.374704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.374758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.374949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.375029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.375316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.375374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.375581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.375636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.375854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.375910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.376098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.376179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.376421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.376475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.376690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.376744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.377016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.377080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.377321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.377375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.377622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.377677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.377892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.377946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.378171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.378227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.378387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.378439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.378646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.378700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.378949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.379011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.379243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.379301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.379597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.379666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 
00:29:09.163 [2024-12-09 10:39:41.379883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.163 [2024-12-09 10:39:41.379938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.163 qpair failed and we were unable to recover it. 00:29:09.163 [2024-12-09 10:39:41.380120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.380188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.380410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.380465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.380680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.380734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.380981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.381044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.381338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.381393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.381609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.381663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.381835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.381889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.382158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.382235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.382436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.382490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.382738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.382793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.382975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.383028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.383209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.383265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.383502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.383562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.383828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.383887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.384190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.384250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.384486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.384544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.384736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.384790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.385055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.385118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.385386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.385441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.385690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.385748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.386000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.386064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.386342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.386402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.386633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.386691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.386909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.386969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.387182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.387242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.387505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.387563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.387808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.387867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.388129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.388199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.388381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.388441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.388650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.388709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.388963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.389026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.389316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.389381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.389647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.389727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.390012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.390092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.164 [2024-12-09 10:39:41.390351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.390415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.390705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.390784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.390999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.391058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.391291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.391351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 00:29:09.164 [2024-12-09 10:39:41.391600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.164 [2024-12-09 10:39:41.391659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.164 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.391931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.392021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.392309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.392373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.392576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.392636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.392871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.392931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.393174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.393235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.393471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.393530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.393759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.393817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.394039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.394097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.394349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.394407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.394635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.394695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.394881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.394940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.395116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.395191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.395432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.395491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.395749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.395821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.396052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.396111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.396376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.396434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.396640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.396698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.396904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.396962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.397184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.397244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.397515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.397574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.397818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.397876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.398064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.398122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.398349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.398408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.398679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.398737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.398946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.399003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.399226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.399285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.399549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.399607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.399898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.399957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.400185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.400244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.400455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.400513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 
00:29:09.165 [2024-12-09 10:39:41.400738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.400795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.401025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.401083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.401349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.401408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.165 qpair failed and we were unable to recover it. 00:29:09.165 [2024-12-09 10:39:41.401676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.165 [2024-12-09 10:39:41.401733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.401971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.402030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-12-09 10:39:41.402268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.402329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.402522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.402579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.402838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.402896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.403121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.403194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.403459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.403518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-12-09 10:39:41.403719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.403783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.404007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.404065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.404311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.404370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.404630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.404687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.404906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.404968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-12-09 10:39:41.405197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.405261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.405545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.405609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.405905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.405970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.406251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.406310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.406579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.406636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-12-09 10:39:41.406822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.406881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.407158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.407257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.407506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.407565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.407790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.407861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 00:29:09.166 [2024-12-09 10:39:41.408136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.166 [2024-12-09 10:39:41.408208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.166 qpair failed and we were unable to recover it. 
00:29:09.166 [2024-12-09 10:39:41.408440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.166 [2024-12-09 10:39:41.408498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.166 qpair failed and we were unable to recover it.
[the same connect() failed / sock connection error / qpair failed sequence repeats for tqpair=0x7f52a0000b90 through 2024-12-09 10:39:41.428091]
00:29:09.168 [2024-12-09 10:39:41.428399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.168 [2024-12-09 10:39:41.428498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.168 qpair failed and we were unable to recover it.
[the same sequence repeats for tqpair=0x7f5294000b90 through 2024-12-09 10:39:41.445128]
00:29:09.169 [2024-12-09 10:39:41.445416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.445487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.445778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.445842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.446094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.446196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.446487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.446553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.446819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.446883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 
00:29:09.169 [2024-12-09 10:39:41.447174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.447241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.447531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.447599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.447848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.447913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.448221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.448287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.448580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.448646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 
00:29:09.169 [2024-12-09 10:39:41.448855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.448920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.449181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.449248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.169 qpair failed and we were unable to recover it. 00:29:09.169 [2024-12-09 10:39:41.449497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.169 [2024-12-09 10:39:41.449563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.449797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.449863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.450114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.450196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.450483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.450549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.450756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.450823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.451092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.451175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.451470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.451535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.451789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.451853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.452061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.452129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.452445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.452510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.452763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.452828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.453109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.453204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.453427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.453492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.453783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.453848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.454067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.454136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.454408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.454474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.454694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.454761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.455050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.455115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.455390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.455473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.455778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.455843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.456160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.456227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.456438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.456505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.456797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.456862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.457168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.457235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.457478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.457543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.457796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.457861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.458129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.458212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.458425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.458491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.458681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.458754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.459045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.459111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.459384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.459460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.459684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.459749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 00:29:09.170 [2024-12-09 10:39:41.459972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.170 [2024-12-09 10:39:41.460037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.170 qpair failed and we were unable to recover it. 
00:29:09.170 [2024-12-09 10:39:41.460289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.460357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.460619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.460684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.460923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.460989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.461231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.461300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.461535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.461604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-12-09 10:39:41.461827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.461893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.462086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.462168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.462467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.462533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.462826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.462890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.463180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.463247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-12-09 10:39:41.463760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.463826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.464137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.464216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.464481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.464550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.464798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.464863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.465062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.465126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-12-09 10:39:41.465374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.465446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.465690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.465757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.465996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.466063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.466361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.466429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.466718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.466783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-12-09 10:39:41.467081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.467163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.467415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.467480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.467735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.467803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.468046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.468112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.468376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.468443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-12-09 10:39:41.468731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.468808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.469097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.469192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.469449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.469515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.469814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.469879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 00:29:09.171 [2024-12-09 10:39:41.470182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.171 [2024-12-09 10:39:41.470248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.171 qpair failed and we were unable to recover it. 
00:29:09.171 [2024-12-09 10:39:41.470494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.171 [2024-12-09 10:39:41.470559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.171 qpair failed and we were unable to recover it.
[... the same three-line error (connect() failed, errno = 111 / sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every reconnect attempt from 10:39:41.470 through 10:39:41.507; repeats elided ...]
00:29:09.174 [2024-12-09 10:39:41.508098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.174 [2024-12-09 10:39:41.508193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.174 qpair failed and we were unable to recover it. 00:29:09.174 [2024-12-09 10:39:41.508446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.174 [2024-12-09 10:39:41.508512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.174 qpair failed and we were unable to recover it. 00:29:09.174 [2024-12-09 10:39:41.508774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.174 [2024-12-09 10:39:41.508840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.509135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.509221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.509465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.509529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.509774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.509842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.510043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.510109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.510406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.510470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.510736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.510801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.511045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.511111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.511389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.511465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.511760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.511825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.512078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.512171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.512402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.512467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.512697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.512763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.513051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.513114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.513386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.513453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.513703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.513768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.514011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.514075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.514330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.514395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.514681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.514746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.514992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.515056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.515287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.515353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.515620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.515683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.515916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.515984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.516268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.516334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.516587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.516652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.516865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.516933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.517240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.517308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.517528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.517593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.517818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.517884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.518156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.518223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.518516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.518581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.518860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.518925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.519167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.519236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.175 [2024-12-09 10:39:41.519525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.519591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.519799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.519866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.520071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.520167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.520443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.520508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 00:29:09.175 [2024-12-09 10:39:41.520798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.175 [2024-12-09 10:39:41.520863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.175 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.521110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.521201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.521495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.521560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.521808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.521872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.522110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.522187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.522478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.522543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.522750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.522816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.523106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.523184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.523426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.523492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.523785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.523850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.524088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.524200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.524462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.524530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.524750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.524817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.525036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.525103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.525417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.525483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.525780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.525845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.526110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.526194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.526451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.526516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.526797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.526861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.527110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.527193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.527466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.527530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.527822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.527888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.528185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.528254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.528503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.528568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.528855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.528919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.529169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.529235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.529491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.529558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.529845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.529909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.530180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.530250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.530515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.530581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.530774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.530838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.531125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.531213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.531424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.531492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.531771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.531836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.532076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.532175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.532460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.532525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 
00:29:09.176 [2024-12-09 10:39:41.532735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.532799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.533040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.176 [2024-12-09 10:39:41.533105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.176 qpair failed and we were unable to recover it. 00:29:09.176 [2024-12-09 10:39:41.533379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.533445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.533691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.533755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.533929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.533993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.534209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.534289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.534577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.534642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.534893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.534960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.535202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.535268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.535560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.535626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.535876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.535941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.536193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.536259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.536487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.536551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.536784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.536849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.537063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.537129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.537389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.537454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.537669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.537738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.537983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.538048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.538347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.538414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.538722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.538789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.539038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.539104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.539411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.539475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.539772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.539837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.540124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.540213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.540485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.540550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.540770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.540836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.541105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.541188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.541484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.541548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.541791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.541860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.542122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.542202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.542450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.542517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.542777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.542842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.543094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.543174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.543463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.543529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.543730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.543800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 
00:29:09.177 [2024-12-09 10:39:41.544056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.544122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.544399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.544464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.544707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.544772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.545027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.545091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.177 qpair failed and we were unable to recover it. 00:29:09.177 [2024-12-09 10:39:41.545442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.177 [2024-12-09 10:39:41.545540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.545816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.545882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.546197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.546274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.546579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.546644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.546897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.546960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.547223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.547289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.547577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.547642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.547955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.548020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.548338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.548408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.548708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.548771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.549053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.549117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.549402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.549466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.549770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.549833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.550076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.550161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.550411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.550481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.550735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.550802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.551066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.551131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.551443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.551508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.551808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.551872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.552129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.552211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.552476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.552540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.552842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.552905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.553201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.553269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.553525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.553592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.553878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.553942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.554244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.554311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.554603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.554667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.554925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.554989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.555241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.555308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.555604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.555680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.555897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.555962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.556266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.556331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.556621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.556685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.556951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.557015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.557313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.557380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.557661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.557725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 
00:29:09.178 [2024-12-09 10:39:41.557977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.558041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.558310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.558376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.178 qpair failed and we were unable to recover it. 00:29:09.178 [2024-12-09 10:39:41.558591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.178 [2024-12-09 10:39:41.558655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.558922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.558987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.559209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.559276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 
00:29:09.179 [2024-12-09 10:39:41.559504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.559568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.559859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.559923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.560130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.560210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.560408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.560471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.560768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.560842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 
00:29:09.179 [2024-12-09 10:39:41.561107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.561185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.561486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.561566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.561852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.561916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.562223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.562288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.562589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.562658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 
00:29:09.179 [2024-12-09 10:39:41.562915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.562989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.563280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.563346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.563606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.563671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.563912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.563978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.564236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.564302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 
00:29:09.179 [2024-12-09 10:39:41.564566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.564630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.564921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.564995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.565298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.565364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.565626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.565691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.565960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.566023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 
00:29:09.179 [2024-12-09 10:39:41.566307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.566373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.566664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.566729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.566935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.567001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.567250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.567317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 00:29:09.179 [2024-12-09 10:39:41.567626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.179 [2024-12-09 10:39:41.567700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.179 qpair failed and we were unable to recover it. 
00:29:09.179 [2024-12-09 10:39:41.567884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.179 [2024-12-09 10:39:41.567949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.179 [2024-12-09 10:39:41.568159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.179 [2024-12-09 10:39:41.568226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.179 [2024-12-09 10:39:41.568530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.179 [2024-12-09 10:39:41.568594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.179 [2024-12-09 10:39:41.568846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.179 [2024-12-09 10:39:41.568910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.179 [2024-12-09 10:39:41.569200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.179 [2024-12-09 10:39:41.569267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.179 [2024-12-09 10:39:41.569529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.179 [2024-12-09 10:39:41.569594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.179 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.569848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.569912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.570203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.570270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.570557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.570632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.570905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.570969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.571225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.571293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.571591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.571657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.571860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.571927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.572190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.572264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.572505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.572599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.572977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.573054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.573336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.573403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.573699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.573765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.574050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.574116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.574392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.574467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.574767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.574860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.575222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.575316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.575596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.575663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.575907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.575973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.576233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.576298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.576579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.576644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.576894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.576963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.577269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.577365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.180 [2024-12-09 10:39:41.577690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.180 [2024-12-09 10:39:41.577762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.180 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.578025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.578092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.578317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.578383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.578650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.578718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.579008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.579073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.579356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.579422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.579667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.579733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.579969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.580047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.580278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.580345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.580574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.580640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.580929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.580994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.581293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.581359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.581569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.581634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.581921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.581986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.582244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.582310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.582601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.582665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.582914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.582979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.583199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.583266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.583549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.583614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.583881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.583947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.584192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.584259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.584562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.584627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.584920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.584986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.585195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.585264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.585495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.585560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.585851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.585916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.586181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.586247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.586499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.586563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.586858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.586923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.587173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.587240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.587531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.587595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.587850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.587915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.588180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.588246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.588525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.588590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.588834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.588900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.589123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.589212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.589421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.589486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.589722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.589786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.589973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.462 [2024-12-09 10:39:41.590038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.462 qpair failed and we were unable to recover it.
00:29:09.462 [2024-12-09 10:39:41.590292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.590360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.590626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.590692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.590900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.590965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.591215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.591282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.591587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.591652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.591900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.591964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.592259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.592325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.592561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.592625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.592879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.592954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.593264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.593331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2657217 Killed "${NVMF_APP[@]}" "$@"
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.593577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.593641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.593856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.593921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:09.463 [2024-12-09 10:39:41.594158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.594233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.594488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:09.463 [2024-12-09 10:39:41.594552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.594803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.594868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.595093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.595176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:09.463 [2024-12-09 10:39:41.595425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.595495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.595799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.595865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.596169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.596243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.596499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.596564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.596792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.596858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.597085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.597181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.597354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.597388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.597518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.597553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.597693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.597728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.597937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.463 [2024-12-09 10:39:41.598001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.463 qpair failed and we were unable to recover it.
00:29:09.463 [2024-12-09 10:39:41.598267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.598333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.598602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.598637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.598779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.598814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.599037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.599101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.599382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.599459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.463 [2024-12-09 10:39:41.599709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.599774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.599992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.600057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.600276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.600311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.600476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.600511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 00:29:09.463 [2024-12-09 10:39:41.600625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.463 [2024-12-09 10:39:41.600658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.463 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.600768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.600801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.600945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.600979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.601075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.601108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.601243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.601279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.601418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.601452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.601687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.601752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.602017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2657777 00:29:09.464 [2024-12-09 10:39:41.602082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:09.464 [2024-12-09 10:39:41.602278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.602312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2657777 00:29:09.464 [2024-12-09 10:39:41.602461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.602497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2657777 ']' 00:29:09.464 [2024-12-09 10:39:41.602637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.602672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.464 [2024-12-09 10:39:41.602842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.602877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:09.464 [2024-12-09 10:39:41.603023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.464 [2024-12-09 10:39:41.603059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.603193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:09.464 [2024-12-09 10:39:41.603227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.603347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.464 [2024-12-09 10:39:41.603382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.603533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.603568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.603712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.603747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.603882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.603917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.604032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.604065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.604201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.604235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.604370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.604404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.604550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.604585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.604732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.604764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.604902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.604933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.605056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.605086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.605216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.605249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.605370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.605401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.605503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.605533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.605672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.605703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.605831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.605864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.605967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.606000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.606151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.606185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.606300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.606333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 
00:29:09.464 [2024-12-09 10:39:41.606443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.606476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.464 qpair failed and we were unable to recover it. 00:29:09.464 [2024-12-09 10:39:41.606659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.464 [2024-12-09 10:39:41.606697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.606859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.606892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.607003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.607156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.607330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.607502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.607644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.607795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.607935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.607968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.608107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.608148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.608273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.608304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.608428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.608460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.608555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.608586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.608694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.608725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.608835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.608867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.608973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.609136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.609276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.609440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.609598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.609745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.609917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.609949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.610079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.610111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.610240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.610272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.610386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.610417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.610541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.610572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.610673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.610705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.610838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.610870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.611009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.611041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.611169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.611217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.611326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.611356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.611515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.611545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.611678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.611709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.611833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.611864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.465 [2024-12-09 10:39:41.611977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.612007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.612101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.612131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.612240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.612271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.612377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.612407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 00:29:09.465 [2024-12-09 10:39:41.612532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.465 [2024-12-09 10:39:41.612562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.465 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.612689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.612718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.612848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.612878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.612979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.613147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.613278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.613403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.613586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.613716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.613884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.613914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.614024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.614183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.614313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.614479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.614634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.614792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.614963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.614994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.615126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.615167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.615311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.615340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.615425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.615455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.615590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.615619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.615744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.615773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.615869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.615899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.615998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.616123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.616269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.616417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.616568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.616695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.616818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.616935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.616969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 
00:29:09.466 [2024-12-09 10:39:41.617066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.617096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.617206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.466 [2024-12-09 10:39:41.617236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.466 qpair failed and we were unable to recover it. 00:29:09.466 [2024-12-09 10:39:41.617362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.617391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.617507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.617536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.617664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.617693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.617798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.617827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.617954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.617983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.618106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.618279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.618425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.618543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.618663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.618814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.618939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.618967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.619112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.619234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.619356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.619479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.619607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.619726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.619908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.619936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.620028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.620168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.620292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.620445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.620589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.620708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.620860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.620888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.621011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.621162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.621284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.621435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.621556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.621676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.621803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.621927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.621955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.622050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.622078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.622197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.622225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.622339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.622367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 00:29:09.467 [2024-12-09 10:39:41.622510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.467 [2024-12-09 10:39:41.622537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.467 qpair failed and we were unable to recover it. 
00:29:09.467 [2024-12-09 10:39:41.622660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.622695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.622868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.622899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.623012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.623161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.623309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.623442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.623597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.623785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.623947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.623977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.624071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.624103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.628163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.628198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.628314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.628342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.628473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.628501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.628608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.628641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.628732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.628759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.628853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.628881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.629004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.629120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.629275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.629433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.629558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.629703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.629831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.629860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.630009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.630133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.630262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.630388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.630568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.630714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.630845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.630874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.630991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.631026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.631149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.631177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.631308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.631356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.631505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.631543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.631686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.631725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 
00:29:09.468 [2024-12-09 10:39:41.631849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.631877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.631993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.468 [2024-12-09 10:39:41.632019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.468 qpair failed and we were unable to recover it. 00:29:09.468 [2024-12-09 10:39:41.632102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.632261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.632406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.632536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.632688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.632808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.632959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.632993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.633208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.633323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.633472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.633581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.633699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.633836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.633956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.633983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.634097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.634244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.634365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.634535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.634679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.634828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.634946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.634973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.635082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.635211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.635329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.635438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.635544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.635680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.635812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.635927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.635952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.636493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.636934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.636960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 00:29:09.469 [2024-12-09 10:39:41.637038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.637065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.469 qpair failed and we were unable to recover it. 
00:29:09.469 [2024-12-09 10:39:41.637170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.469 [2024-12-09 10:39:41.637207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.637297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.637324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.637403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.637430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.637526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.637552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.637637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.637663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.637771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.637798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.637897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.637933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.638398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.638901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.638928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.639092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.639213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.639325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.639439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.639574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.639720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.639858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.639968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.639994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.640109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.640231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.640375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.640515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.640654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.640776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.640879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.640906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.640995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.641132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.641254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.641371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.641484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 
00:29:09.470 [2024-12-09 10:39:41.641655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.641801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.641905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.470 [2024-12-09 10:39:41.641932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.470 qpair failed and we were unable to recover it. 00:29:09.470 [2024-12-09 10:39:41.642016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.642151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 
00:29:09.471 [2024-12-09 10:39:41.642294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.642415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.642556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.642669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.642805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 
00:29:09.471 [2024-12-09 10:39:41.642945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.642971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.643092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.643118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.643245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.643272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.643386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.643412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.643521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.643547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 
00:29:09.471 [2024-12-09 10:39:41.643688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.643714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.643827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.643853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.643980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.644091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.644214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 
00:29:09.471 [2024-12-09 10:39:41.644352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.644458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.644588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.644699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.644817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.644844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 
00:29:09.471 [2024-12-09 10:39:41.644973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.645112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.645281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.645394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.645550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 
00:29:09.471 [2024-12-09 10:39:41.645690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.471 [2024-12-09 10:39:41.645794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.471 [2024-12-09 10:39:41.645820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.471 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.645909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.645935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.646018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.646136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.646331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.646466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.646591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.646755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.646898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.646925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.647042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.647632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.647893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.647998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.648111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.648241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.648358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.648504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.648669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.648817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.648956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.648982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.649088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.649203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.649367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.649492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.649606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.649747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.649870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.649896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.650002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.650028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.650117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.650151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 
00:29:09.472 [2024-12-09 10:39:41.650265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.650291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.650383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.650410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.650519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.650546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.472 [2024-12-09 10:39:41.650658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.472 [2024-12-09 10:39:41.650684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.472 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.650812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.650838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.650925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.650951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.651537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.651892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.651976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.652096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.652259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.652402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.652539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.652689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.652835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.652861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.652974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.653110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.653280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.653421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.653543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.653671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.653813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.653960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.653991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.654074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.654205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.654322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.654454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.654562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.654666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.654769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.654907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.654933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.655074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.655100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.655198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.655225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.655315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.655344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.655440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.655467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 
00:29:09.473 [2024-12-09 10:39:41.655555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.655581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.473 [2024-12-09 10:39:41.655673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.473 [2024-12-09 10:39:41.655699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.473 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.655808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.655834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.655917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.655943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-12-09 10:39:41.656134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-12-09 10:39:41.656750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.656967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.656993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.657076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.657202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-12-09 10:39:41.657348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.657462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.657572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.657676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.657789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-12-09 10:39:41.657930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.657956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-12-09 10:39:41.658613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.658889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.658977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 
00:29:09.474 [2024-12-09 10:39:41.659240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659767] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:29:09.474 [2024-12-09 10:39:41.659838] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.474 [2024-12-09 10:39:41.659842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.659867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.659977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.474 [2024-12-09 10:39:41.660002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.474 qpair failed and we were unable to recover it. 00:29:09.474 [2024-12-09 10:39:41.660093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.660220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.660323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.660463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.660568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.660720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.660858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.660884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.660976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.661153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.661271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.661384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.661532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.661643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.661760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.661867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.661893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.661988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.662121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.662276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.662414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.662527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.662672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.662815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.662942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.662968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.663050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.663183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.663328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.663467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.663575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.663717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.663828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.663969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.663996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.664107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.664133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.664256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.664282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 
00:29:09.475 [2024-12-09 10:39:41.664369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.664396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.664544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.664570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.475 qpair failed and we were unable to recover it. 00:29:09.475 [2024-12-09 10:39:41.664661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.475 [2024-12-09 10:39:41.664687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.664805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.664830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.664942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.664968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 
00:29:09.476 [2024-12-09 10:39:41.665060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.665180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.665290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.665431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.665542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 
00:29:09.476 [2024-12-09 10:39:41.665685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.665862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.665888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 
00:29:09.476 [2024-12-09 10:39:41.666372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.666847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 
00:29:09.476 [2024-12-09 10:39:41.666964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.666991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.667134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.667169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.667285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.667311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.667389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.667416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 00:29:09.476 [2024-12-09 10:39:41.667529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.476 [2024-12-09 10:39:41.667556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.476 qpair failed and we were unable to recover it. 
00:29:09.476 [2024-12-09 10:39:41.667637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.476 [2024-12-09 10:39:41.667664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.476 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats roughly 100 more times between 10:39:41.667764 and 10:39:41.683017, for tqpair values 0x7f52a0000b90, 0x1f1efa0, 0x7f5294000b90, and 0x7f5298000b90, all with addr=10.0.0.2, port=4420 ...]
00:29:09.479 [2024-12-09 10:39:41.683159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.683267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.683389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.683510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.683627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-12-09 10:39:41.683770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.683909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.683936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.684023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.684145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.684300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 
00:29:09.479 [2024-12-09 10:39:41.684414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.684577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.684745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.479 qpair failed and we were unable to recover it. 00:29:09.479 [2024-12-09 10:39:41.684855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.479 [2024-12-09 10:39:41.684881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.684993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.685135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.685280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.685409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.685523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.685669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.685779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.685923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.685950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.686464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.686884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.686978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.687121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.687282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.687427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.687538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.687682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.687808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.687953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.687980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.688480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.688955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.688982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 
00:29:09.480 [2024-12-09 10:39:41.689069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.689097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.689182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.480 [2024-12-09 10:39:41.689209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.480 qpair failed and we were unable to recover it. 00:29:09.480 [2024-12-09 10:39:41.689343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.689383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.689505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.689533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.689625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.689651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.689739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.689765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.689880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.689907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.689997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.690116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.690270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.690405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.690515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.690664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.690787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.690916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.690948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.691039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.691181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.691326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.691441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.691574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.691701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.691812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.691942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.691968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.692093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.692247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.692359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.692487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.692635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.692772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.692934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.692960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.693070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.693186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.693298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.693407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.693545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.481 [2024-12-09 10:39:41.693658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.693809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.693918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.693943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.694030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.694056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 00:29:09.481 [2024-12-09 10:39:41.694172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.481 [2024-12-09 10:39:41.694199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.481 qpair failed and we were unable to recover it. 
00:29:09.482 [2024-12-09 10:39:41.694271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.694302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.694393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.694419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.694541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.694567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.694656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.694682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.694793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.694819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.694913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.694939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.695896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.695922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.696908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.696936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.697886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.697914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.698109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.698295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.698424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.698568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.698718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.698859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.698992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.699032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.699158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.699187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.482 [2024-12-09 10:39:41.699297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.482 [2024-12-09 10:39:41.699324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.482 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.699413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.699439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.699532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.699558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.699699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.699725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.699818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.699845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.699957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.699984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.700909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.700937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.701883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.701922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.702843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.702871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.703952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.703992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.704079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.704107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.704208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.704235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.704322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.704349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.704463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.704490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.483 qpair failed and we were unable to recover it.
00:29:09.483 [2024-12-09 10:39:41.704584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.483 [2024-12-09 10:39:41.704610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.704689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.704716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.704833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.704862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.704970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.705972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.705998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.706933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.706960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.707080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.484 [2024-12-09 10:39:41.707120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.484 qpair failed and we were unable to recover it.
00:29:09.484 [2024-12-09 10:39:41.707260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.707288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.707403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.707430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.707513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.707541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.707680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.707706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.707862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.707902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 
00:29:09.484 [2024-12-09 10:39:41.708003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.708166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.708315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.708431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.708538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 
00:29:09.484 [2024-12-09 10:39:41.708653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.708791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.708905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.708932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.709037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.709064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.709152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.709180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 
00:29:09.484 [2024-12-09 10:39:41.709294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.709321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.709427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.709453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.709543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.484 [2024-12-09 10:39:41.709569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.484 qpair failed and we were unable to recover it. 00:29:09.484 [2024-12-09 10:39:41.709654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.709681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.709765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.709791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.709925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.709965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.710535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.710950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.710976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.711089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.711217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.711331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.711472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.711647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.711812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.711954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.711983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.712081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.712109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.712367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.712394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.712541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.712568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.712706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.712732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.712859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.712885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.713496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.713911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.713952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.714051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.714079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 
00:29:09.485 [2024-12-09 10:39:41.714209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.714238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.485 [2024-12-09 10:39:41.714330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.485 [2024-12-09 10:39:41.714357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.485 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.714471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.714498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.714649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.714675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.714772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.714799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.714895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.714934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.715063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.715226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.715343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.715496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.715631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.715770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.715895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.715934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.716059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.716183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.716303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.716436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.716588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.716695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.716844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.716874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.716993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.717135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.717284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.717396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.717540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.717672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.717816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.717965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.717993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.718078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.718194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.718310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.718412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.718551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.718716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.718859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.718886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.486 [2024-12-09 10:39:41.719005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.719034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.719189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.719217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.719334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.719361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.719453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.719480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 00:29:09.486 [2024-12-09 10:39:41.719559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.486 [2024-12-09 10:39:41.719585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.486 qpair failed and we were unable to recover it. 
00:29:09.487 [2024-12-09 10:39:41.719708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.487 [2024-12-09 10:39:41.719734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.487 qpair failed and we were unable to recover it. 00:29:09.487 [2024-12-09 10:39:41.719821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.487 [2024-12-09 10:39:41.719847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.487 qpair failed and we were unable to recover it. 00:29:09.487 [2024-12-09 10:39:41.719970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.487 [2024-12-09 10:39:41.719998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.487 qpair failed and we were unable to recover it. 00:29:09.487 [2024-12-09 10:39:41.720117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.487 [2024-12-09 10:39:41.720157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.487 qpair failed and we were unable to recover it. 00:29:09.487 [2024-12-09 10:39:41.720247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.487 [2024-12-09 10:39:41.720274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.487 qpair failed and we were unable to recover it. 
00:29:09.487 [2024-12-09 10:39:41.720401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.720428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.720517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.720543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.720680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.720706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.720793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.720819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.720908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.720935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.721895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.721993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.722153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.722298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.722464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.722572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.722744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.722896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.722922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.723888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.723915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.487 qpair failed and we were unable to recover it.
00:29:09.487 [2024-12-09 10:39:41.724852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.487 [2024-12-09 10:39:41.724879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.724995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.725873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.725899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.726903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.726931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.727071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.727221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.727386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.727538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.727709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.727847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.727988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.728863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.728988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.729925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.488 [2024-12-09 10:39:41.729952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.488 qpair failed and we were unable to recover it.
00:29:09.488 [2024-12-09 10:39:41.730037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.730948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.730977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.731937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.731964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.732972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.732999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.733158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.733186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.733271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.489 [2024-12-09 10:39:41.733299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.489 qpair failed and we were unable to recover it.
00:29:09.489 [2024-12-09 10:39:41.733386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.733413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.733549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.733576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.733695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.733721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.733833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.733861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.733953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.733980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 
00:29:09.489 [2024-12-09 10:39:41.734103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.734132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.734235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.734262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.734347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.489 [2024-12-09 10:39:41.734375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.489 qpair failed and we were unable to recover it. 00:29:09.489 [2024-12-09 10:39:41.734467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.734495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.734636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.734663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.734781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.734812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.734928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.734956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.735088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.735276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.735414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.735557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.735667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.735799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.735920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.735948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.736052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.736238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.736362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.736470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.736612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.736773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.736881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.736908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.737007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.737171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.737347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.737513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.737650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.737785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.737922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.737956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.738054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.738216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.738388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.738560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.738679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.738788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.738915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.738942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.739037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.739067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.739191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.739219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.739296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.739323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.739407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.739434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.739569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.739596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 
00:29:09.490 [2024-12-09 10:39:41.739708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.490 [2024-12-09 10:39:41.739737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.490 qpair failed and we were unable to recover it. 00:29:09.490 [2024-12-09 10:39:41.739855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.739883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.739997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.740097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.740221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.740395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.740518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.740684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.740847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.740874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.740994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.741143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.741282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.741393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.741572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.741692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.741837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.741971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.741997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:09.491 [2024-12-09 10:39:41.742325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.742429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.742952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.742994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.743116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.743156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.743297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.743324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.743418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.743457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.743587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.743614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.743692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.743718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.743808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.743835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.743979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.744177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.744321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.744443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 
00:29:09.491 [2024-12-09 10:39:41.744557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.744682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.744799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.744828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.744990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.491 [2024-12-09 10:39:41.745030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.491 qpair failed and we were unable to recover it. 00:29:09.491 [2024-12-09 10:39:41.745121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.745160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.745257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.745284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.745372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.745398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.745542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.745576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.745663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.745691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.745831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.745870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.745971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.746136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.746294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.746436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.746584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.746721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.746867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.746900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.746988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.747116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.747277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.747428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.747553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.747696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.747839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.747867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.747988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.748103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.748259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.748376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.748483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.748591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.748731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.748851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.748892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.748997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.749155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.749274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.749390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.749502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.749623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.749755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.749868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.749898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 
00:29:09.492 [2024-12-09 10:39:41.750023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.750051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.750147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.492 [2024-12-09 10:39:41.750179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.492 qpair failed and we were unable to recover it. 00:29:09.492 [2024-12-09 10:39:41.750270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.750298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.750383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.750409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.750518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.750544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.750620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.750647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.750784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.750810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.750888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.750915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.750998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.751172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.751304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.751427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.751549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.751671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.751789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.751935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.751963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.752604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.752965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.752992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.753087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.753206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.753362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.753481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.753618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.753753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.753895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.753922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.754552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.754933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.754960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 00:29:09.493 [2024-12-09 10:39:41.755099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.493 [2024-12-09 10:39:41.755137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.493 qpair failed and we were unable to recover it. 
00:29:09.493 [2024-12-09 10:39:41.755272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.755299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.755384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.755410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.755518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.755545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.755637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.755666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.755785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.755813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 
00:29:09.494 [2024-12-09 10:39:41.755959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.755985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.756074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.756235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.756358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.756479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 
00:29:09.494 [2024-12-09 10:39:41.756628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.756780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.756944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.756981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.757066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.757212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 
00:29:09.494 [2024-12-09 10:39:41.757347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.757524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.757632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.757801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 00:29:09.494 [2024-12-09 10:39:41.757912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.494 [2024-12-09 10:39:41.757938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.494 qpair failed and we were unable to recover it. 
00:29:09.494 [2024-12-09 10:39:41.758025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.758165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.758291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.758397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.758572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.758740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.758899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.758928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.494 [2024-12-09 10:39:41.759927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.494 [2024-12-09 10:39:41.759954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.494 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.760906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.760935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.761082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.761230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.761376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.761541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.761673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.761824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.761960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.762915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.762997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.763970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.763996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.764119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.764152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.764278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.764307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.764425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.764454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.764554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.764583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.764727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.764755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.764840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.764869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.765001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.765040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.765170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.765199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.495 [2024-12-09 10:39:41.765294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.495 [2024-12-09 10:39:41.765321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.495 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.765410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.765437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.765518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.765546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.765657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.765684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.765798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.765825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.765920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.765950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.766891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.766918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.767970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.767997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.768876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.768903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.769863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.769990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.770030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.770126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.770161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.770275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.496 [2024-12-09 10:39:41.770301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.496 qpair failed and we were unable to recover it.
00:29:09.496 [2024-12-09 10:39:41.770423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.770450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.770528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.770554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.770671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.770699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.770802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.770831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.770949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.770976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.771879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.771977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.772928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.772969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.773091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.773236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.773362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.773490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.773629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.497 [2024-12-09 10:39:41.773741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.497 qpair failed and we were unable to recover it.
00:29:09.497 [2024-12-09 10:39:41.773827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.773853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.773964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.773992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.774082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.774242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.774363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 
00:29:09.497 [2024-12-09 10:39:41.774501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.774644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.774754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.774865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.774893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.775005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.775034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 
00:29:09.497 [2024-12-09 10:39:41.775135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.775171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.775286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.497 [2024-12-09 10:39:41.775314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.497 qpair failed and we were unable to recover it. 00:29:09.497 [2024-12-09 10:39:41.775427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.775465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.775557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.775596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.775712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.775739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 
00:29:09.498 [2024-12-09 10:39:41.775876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.775911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 
00:29:09.498 [2024-12-09 10:39:41.776590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.776944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.776981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.777120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 
00:29:09.498 [2024-12-09 10:39:41.777248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.777363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.777471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.777636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 
00:29:09.498 [2024-12-09 10:39:41.777912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.777939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 
00:29:09.498 [2024-12-09 10:39:41.778600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.778889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.778998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.779025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.779121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.779173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 
00:29:09.498 [2024-12-09 10:39:41.779288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.779329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.779422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.779451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.779544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.779571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.779689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.498 [2024-12-09 10:39:41.779719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.498 qpair failed and we were unable to recover it. 00:29:09.498 [2024-12-09 10:39:41.779861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.779889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.780003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.780163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.780273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.780383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.780551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.780698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.780805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.780912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.780939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.781031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.781178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.781299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.781423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.781538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.781674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.781840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.781868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.781986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.782109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.782221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.782381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.782533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.782676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.782794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.782943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.782970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.783088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.783236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.783389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.783521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.783669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.783813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.783951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.783979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.784062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.784181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.784330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.784440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.784580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 
00:29:09.499 [2024-12-09 10:39:41.784693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.499 [2024-12-09 10:39:41.784811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.499 [2024-12-09 10:39:41.784840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.499 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.784976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.785095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.785274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.785392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.785527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.785669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.785785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.785930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.785957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.786043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.786215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.786329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.786438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.786540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.786682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.786848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.786876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.786989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.787115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.787273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.787425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.787547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.787682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.787824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.787963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.787991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.788125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.788274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.788385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.788505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.788609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.788727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.788881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.788923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.789047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.789168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.789347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 
00:29:09.500 [2024-12-09 10:39:41.789452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.789571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.789747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.789888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.789916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.500 qpair failed and we were unable to recover it. 00:29:09.500 [2024-12-09 10:39:41.790021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.500 [2024-12-09 10:39:41.790061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.790155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.790282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.790427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.790533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.790650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.790774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.790886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.790913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.791023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.791190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.791304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.791466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.791609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.791720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.791892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.791919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.792012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.792185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.792302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.792412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.792544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.792680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.792818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.792967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.792997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.793097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.793137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.793241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.793275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.793416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.793443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.793533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.793559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.793698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.793725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.793841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.793869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.793977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.794092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.794248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.794416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.794527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.794675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.794815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 
00:29:09.501 [2024-12-09 10:39:41.794943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.794983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.795115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.501 [2024-12-09 10:39:41.795162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.501 qpair failed and we were unable to recover it. 00:29:09.501 [2024-12-09 10:39:41.795263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.795291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.795402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.795428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.795551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.795578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.795718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.795744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.795859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.795886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.796031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.796185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.796341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.796453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.796590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.796720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.796892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.796919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.797059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.797180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.797327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.797462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.797603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.797741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.797891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.797920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.798541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.798911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.798944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.799036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.799175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.799292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.799427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.799548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.799670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.799808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.799951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.799979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.800069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.800097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.800201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.800231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 00:29:09.502 [2024-12-09 10:39:41.800310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.502 [2024-12-09 10:39:41.800337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.502 qpair failed and we were unable to recover it. 
00:29:09.502 [2024-12-09 10:39:41.800433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.800460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.800544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.800571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.800657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.800683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.800766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.800792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.800905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.800931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 
00:29:09.503 [2024-12-09 10:39:41.801070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.801189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.801332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.801471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.801613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 
00:29:09.503 [2024-12-09 10:39:41.801760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.801871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.801900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.801994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.802115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.802290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 
00:29:09.503 [2024-12-09 10:39:41.802428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.802578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.802732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.802879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.802908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.803019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 
00:29:09.503 [2024-12-09 10:39:41.803172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.803296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.803416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.803563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.803704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 
00:29:09.503 [2024-12-09 10:39:41.803819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.803931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.803959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.804050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.804078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.804158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.804185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.804278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.804306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 
00:29:09.503 [2024-12-09 10:39:41.804386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.804413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.804493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.804522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.804602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.503 [2024-12-09 10:39:41.804630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.503 qpair failed and we were unable to recover it. 00:29:09.503 [2024-12-09 10:39:41.804719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.804748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.804856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.804883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.504 [2024-12-09 10:39:41.804884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.804914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:09.504 [2024-12-09 10:39:41.804928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.504 [2024-12-09 10:39:41.804941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.504 [2024-12-09 10:39:41.804951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:09.504 [2024-12-09 10:39:41.804995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.805477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.805883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.805998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.806108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.806232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.806371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.806514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.806547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:09.504 [2024-12-09 10:39:41.806602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:09.504 [2024-12-09 10:39:41.806772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.504 [2024-12-09 10:39:41.806677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.806686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:09.504 [2024-12-09 10:39:41.806887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.806914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.807283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.807894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.807921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.807998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.808112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.808260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.808378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.808522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.808628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.808743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.808874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.808914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.809011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.809040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 
00:29:09.504 [2024-12-09 10:39:41.809132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.504 [2024-12-09 10:39:41.809177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.504 qpair failed and we were unable to recover it. 00:29:09.504 [2024-12-09 10:39:41.809319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.809345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.809425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.809451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.809528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.809554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.809643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.809669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.809776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.809802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.809888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.809918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.810401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.810893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.810921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.811037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.811149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.811261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.811405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.811543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.811649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.811796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.811926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.811952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.812040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.812180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.812326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.812495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.812654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.812764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.812882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.812909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.813034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 
00:29:09.505 [2024-12-09 10:39:41.813609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.505 qpair failed and we were unable to recover it. 00:29:09.505 [2024-12-09 10:39:41.813956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.505 [2024-12-09 10:39:41.813985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.814077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.814203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.814312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.814417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.814539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.814681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.814819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.814926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.814953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.815433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.815899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.815925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.816023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.816610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.816913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.816996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.817101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.817251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.817385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.817516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.817627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.817745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.817858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.817885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.817989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.818124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.818244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.818383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 
00:29:09.506 [2024-12-09 10:39:41.818539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.818652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.818770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.818914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.818941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.506 qpair failed and we were unable to recover it. 00:29:09.506 [2024-12-09 10:39:41.819024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.506 [2024-12-09 10:39:41.819051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-12-09 10:39:41.819131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.819256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.819370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.819504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.819616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-12-09 10:39:41.819728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.819873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.819902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.819989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.820126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.820253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-12-09 10:39:41.820360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.820497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.820611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.820723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.820828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.507 [2024-12-09 10:39:41.820963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.820989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.821096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.821123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.821223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.821249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.821330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.821358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 00:29:09.507 [2024-12-09 10:39:41.821494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.507 [2024-12-09 10:39:41.821533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.507 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-12-09 10:39:41.835188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.835303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.835423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.835549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.835669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-12-09 10:39:41.835808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.835923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.835952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.836058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.836086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.836202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.836230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.510 [2024-12-09 10:39:41.836314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.836341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 
00:29:09.510 [2024-12-09 10:39:41.836431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.510 [2024-12-09 10:39:41.836458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.510 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.836578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.836605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.836682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.836709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.836826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.836852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.836941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.836970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.837051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.837204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.837337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.837481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.837589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.837694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.837849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.837879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.837989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.838103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.838231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.838352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.838500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.838615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.838726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.838835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.838958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.838986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.839530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.839936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.839977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.840167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 
00:29:09.511 [2024-12-09 10:39:41.840759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.840894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.840993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.841034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.841130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.511 [2024-12-09 10:39:41.841165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.511 qpair failed and we were unable to recover it. 00:29:09.511 [2024-12-09 10:39:41.841290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.841318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-12-09 10:39:41.841407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.841434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.841515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.841542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.841627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.841655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.841767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.841794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.841919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.841948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-12-09 10:39:41.842036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.842158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.842271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.842407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.842523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-12-09 10:39:41.842666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.842772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.842887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.842916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.843000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.843116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-12-09 10:39:41.843265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.843389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.843525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.843639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.843749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-12-09 10:39:41.843894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.843921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512 [2024-12-09 10:39:41.844523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.844880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.844907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 00:29:09.512 [2024-12-09 10:39:41.845003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.512 [2024-12-09 10:39:41.845030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.512 qpair failed and we were unable to recover it. 
00:29:09.512-00:29:09.515 [… the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." entries repeat for tqpair values 0x7f5298000b90, 0x7f52a0000b90, 0x7f5294000b90 and 0x1f1efa0, all with addr=10.0.0.2, port=4420, through 2024-12-09 10:39:41.858666 …]
00:29:09.515 [2024-12-09 10:39:41.858744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-12-09 10:39:41.858775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-12-09 10:39:41.858892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-12-09 10:39:41.858920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-12-09 10:39:41.859010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-12-09 10:39:41.859037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-12-09 10:39:41.859123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-12-09 10:39:41.859158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.515 [2024-12-09 10:39:41.859248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-12-09 10:39:41.859275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 
00:29:09.515 [2024-12-09 10:39:41.859362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.515 [2024-12-09 10:39:41.859391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.515 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.859478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.859505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.859585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.859612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.859697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.859724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.859817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.859845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.859936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.859964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.860527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.860905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.860991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.861101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.861231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.861342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.861483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.861589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.861699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.861809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.861920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.861952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.862248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.862831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.862941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.862969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 
00:29:09.516 [2024-12-09 10:39:41.863389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.516 [2024-12-09 10:39:41.863800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.516 qpair failed and we were unable to recover it. 00:29:09.516 [2024-12-09 10:39:41.863881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.863908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.863998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.864584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.864918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.864959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.865065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.865200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.865318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.865436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.865556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.865669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.865775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.865891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.865919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.866391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.866874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.866901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.867004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.867146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.867291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.867417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.867536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 
00:29:09.517 [2024-12-09 10:39:41.867646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.867765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.517 [2024-12-09 10:39:41.867874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.517 [2024-12-09 10:39:41.867901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.517 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.867984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.868112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.868241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.868362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.868479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.868590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.868701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.868814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.868923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.868949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.869392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.869863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.869895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.869982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.870588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.870946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.870987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.871087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.871215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.871323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.871434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.871552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.871654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.871795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.871947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.871973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.872051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.872077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.872168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.872196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.518 [2024-12-09 10:39:41.872276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.872302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 
00:29:09.518 [2024-12-09 10:39:41.872385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.518 [2024-12-09 10:39:41.872412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.518 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.872493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.872519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.872632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.872659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.872735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.872762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.872840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.872868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.872982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.873612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.873963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.873990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.874066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.874181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.874296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.874400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.874520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.874637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.874762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.874913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.874939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.875363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.875796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.875923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.875950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 
00:29:09.519 [2024-12-09 10:39:41.876555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.519 [2024-12-09 10:39:41.876899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.519 [2024-12-09 10:39:41.876930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.519 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-12-09 10:39:41.877131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-12-09 10:39:41.877691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.877929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.877955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.878042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.878180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-12-09 10:39:41.878305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.878435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.878542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.878656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 00:29:09.520 [2024-12-09 10:39:41.878763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.520 [2024-12-09 10:39:41.878790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.520 qpair failed and we were unable to recover it. 
00:29:09.520 [2024-12-09 10:39:41.878870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.520 [2024-12-09 10:39:41.878898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.520 qpair failed and we were unable to recover it.
00:29:09.520 [2024-12-09 10:39:41.878988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.520 [2024-12-09 10:39:41.879016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.520 qpair failed and we were unable to recover it.
00:29:09.520 [2024-12-09 10:39:41.879106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.879919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.879947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.880886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.880983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.881025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.790 [2024-12-09 10:39:41.881112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.790 [2024-12-09 10:39:41.881158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.790 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.881940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.881967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.882891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.882975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.883918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.883946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.884884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.884973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.885001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.885078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.885105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.885238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.885267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.885355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.885381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.885503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.885529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.791 qpair failed and we were unable to recover it.
00:29:09.791 [2024-12-09 10:39:41.885611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.791 [2024-12-09 10:39:41.885638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.885745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.885777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.885858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.885886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.885970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.885998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.886893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.886921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.887951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.887981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.888925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.888954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.889896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.889978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.890005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.792 [2024-12-09 10:39:41.890088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.792 [2024-12-09 10:39:41.890115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.792 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.890241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.890372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.890488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.890599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.890743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.890889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.890980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.891914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.891990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.892017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.892110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.892144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.892229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.892255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.892339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.793 [2024-12-09 10:39:41.892366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.793 qpair failed and we were unable to recover it.
00:29:09.793 [2024-12-09 10:39:41.892453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.892481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.892561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.892587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.892662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.892689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.892789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.892818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.892918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.892945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-12-09 10:39:41.893022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.893132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.893254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.893359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.893503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-12-09 10:39:41.893620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.893757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.893911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.893938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.894060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.894087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.894181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.894209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 
00:29:09.793 [2024-12-09 10:39:41.894299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.894325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.894407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.894434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.894515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.894543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.793 [2024-12-09 10:39:41.894627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.793 [2024-12-09 10:39:41.894655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.793 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.894733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.894760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.894846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.894873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.894958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.894986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.895450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.895898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.895925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.896002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.896570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.896918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.896946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.897154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.897764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.897904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.897996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.898117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.898244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.898357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.898507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.898619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.898755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.898878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.898906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 
00:29:09.794 [2024-12-09 10:39:41.899036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.899076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.794 [2024-12-09 10:39:41.899173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.794 [2024-12-09 10:39:41.899202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.794 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.899282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.899388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.899492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-12-09 10:39:41.899628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.899739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.899852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.899964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.899991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.900075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-12-09 10:39:41.900191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.900332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.900440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.900553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.900667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-12-09 10:39:41.900783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.900893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.900921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.901000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.901027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.901109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.901136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 00:29:09.795 [2024-12-09 10:39:41.901241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.795 [2024-12-09 10:39:41.901268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.795 qpair failed and we were unable to recover it. 
00:29:09.795 [2024-12-09 10:39:41.901357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.795 [2024-12-09 10:39:41.901384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.795 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats continuously from 10:39:41.901357 through 10:39:41.914922 against addr=10.0.0.2, port=4420 for tqpairs 0x7f5294000b90, 0x7f5298000b90, 0x7f52a0000b90, and 0x1f1efa0; repeated entries omitted ...]
00:29:09.798 [2024-12-09 10:39:41.915003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 
00:29:09.798 [2024-12-09 10:39:41.915572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.915908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.915934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.916009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.916035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 
00:29:09.798 [2024-12-09 10:39:41.916117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.916150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.916234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.916260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.798 [2024-12-09 10:39:41.916340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.798 [2024-12-09 10:39:41.916366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.798 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.916446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.916474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.916562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.916589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.916673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.916700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.916775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.916802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.916893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.916922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.917314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.917875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.917977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.918430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.918882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.918916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.919003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.919611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.919967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.919993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.920072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.920100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 
00:29:09.799 [2024-12-09 10:39:41.920196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.799 [2024-12-09 10:39:41.920223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.799 qpair failed and we were unable to recover it. 00:29:09.799 [2024-12-09 10:39:41.920334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.920360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.920435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.920461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.920555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.920582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.920694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.920723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.920803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.920831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.920914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.920941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.921359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.921815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.921924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.921950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.922489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.922916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.922943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.923024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.923589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.923908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.923934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.924015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.924044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 
00:29:09.800 [2024-12-09 10:39:41.924132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.924168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.924256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.924282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.924367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.924393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.924484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.800 [2024-12-09 10:39:41.924510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.800 qpair failed and we were unable to recover it. 00:29:09.800 [2024-12-09 10:39:41.924587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.924613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.924719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.924746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.924826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.924852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.924925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.924951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.925313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.925856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.925972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.925999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.926433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.926887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.926914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.926995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.927617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.927958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.927997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.928087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.928115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.928221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.928249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.928565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.928592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.928673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.928699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.928814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.928841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.928929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.928957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 
00:29:09.801 [2024-12-09 10:39:41.929043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.801 [2024-12-09 10:39:41.929070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.801 qpair failed and we were unable to recover it. 00:29:09.801 [2024-12-09 10:39:41.929154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.929182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.802 [2024-12-09 10:39:41.929262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.929288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.929366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:09.802 [2024-12-09 10:39:41.929393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-12-09 10:39:41.929476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.929503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.929584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.929613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.929713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.929753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:09.802 [2024-12-09 10:39:41.929845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.929873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.802 [2024-12-09 10:39:41.929986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-12-09 10:39:41.930535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.930904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.930986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-12-09 10:39:41.931093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.931209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.931349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.931466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.931574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-12-09 10:39:41.931680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.931795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.931914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.931942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-12-09 10:39:41.932263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 
00:29:09.802 [2024-12-09 10:39:41.932830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.932939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.932966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.933044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.933073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.933187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.802 [2024-12-09 10:39:41.933215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.802 qpair failed and we were unable to recover it. 00:29:09.802 [2024-12-09 10:39:41.933300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.933411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.933519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.933626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.933728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.933836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.933943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.933969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.934473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.934924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.934951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.935030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.935632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.935896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.935976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.936204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.936746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.936958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.936984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.937069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.937099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.937187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.937217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 
00:29:09.803 [2024-12-09 10:39:41.937322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.937354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.937438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.803 [2024-12-09 10:39:41.937465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.803 qpair failed and we were unable to recover it. 00:29:09.803 [2024-12-09 10:39:41.937577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.937603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.937687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.937715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.937802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.937830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.937940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.937970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.938580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.938942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.938968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.939181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.939748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.939960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.939986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.940289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.940873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.940902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.940984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.941012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.941090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.941117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.941216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.941245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.804 [2024-12-09 10:39:41.941332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.941361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 
00:29:09.804 [2024-12-09 10:39:41.941448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.804 [2024-12-09 10:39:41.941475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.804 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.941586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.941612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.941691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.941717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.941794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.941820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.941904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.941931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.942008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.942585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.942911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.942940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.943127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.943718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.943942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.943970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.944286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.944850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.944968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.944997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.945080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.945219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.945332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 
00:29:09.805 [2024-12-09 10:39:41.945468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.945574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.945687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.805 [2024-12-09 10:39:41.945821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.805 [2024-12-09 10:39:41.945848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.805 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.945930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.945960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.946039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.946596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.946955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.946984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.947063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.947177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.947320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.947426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.947535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.947651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.947767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.947874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.947902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.948353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.948786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.948899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.948926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.949443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.949921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.949949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 
00:29:09.806 [2024-12-09 10:39:41.950037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.950064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.806 [2024-12-09 10:39:41.950150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.806 [2024-12-09 10:39:41.950177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.806 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.950259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.950375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.950487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-12-09 10:39:41.950603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.950731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.950842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.950950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.950976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.951055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-12-09 10:39:41.951172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.951317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.951422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.951546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.807 [2024-12-09 10:39:41.951659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:09.807 [2024-12-09 10:39:41.951771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.951886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.951916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.807 [2024-12-09 10:39:41.952007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:09.807 [2024-12-09 10:39:41.952129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-12-09 10:39:41.952248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.952354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.952459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.952571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.952693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-12-09 10:39:41.952801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.952921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.952950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-12-09 10:39:41.953368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.953824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 
00:29:09.807 [2024-12-09 10:39:41.953942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.953969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.954048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.954075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.954170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.954200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.954294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.807 [2024-12-09 10:39:41.954321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.807 qpair failed and we were unable to recover it. 00:29:09.807 [2024-12-09 10:39:41.954430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.954456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.954540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.954566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.954645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.954672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.954762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.954790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.954884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.954924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.955159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.955734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.955875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.955986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.956087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.956202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.956315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.956429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.956542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.956664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.956798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.956908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.956935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.957472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.957931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.957957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 
00:29:09.808 [2024-12-09 10:39:41.958036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.958063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.958171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.808 [2024-12-09 10:39:41.958212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.808 qpair failed and we were unable to recover it. 00:29:09.808 [2024-12-09 10:39:41.958306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.958335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.958431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.958458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.958540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.958567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.958642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.958668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.958744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.958770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.958862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.958888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.959259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.959831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.959950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.959977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.960405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.960891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.960917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.960998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.961574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.961907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.961934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.962027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.962055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.809 [2024-12-09 10:39:41.962152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.962181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.962290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.962317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.962392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.962418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.962491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.962519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 00:29:09.809 [2024-12-09 10:39:41.962643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.809 [2024-12-09 10:39:41.962672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.809 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.962759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.962787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.962874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.962900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.962985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.963325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.963881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.963908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.963988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.964467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.964907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.964932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.965023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.965627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.965966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.965995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.966082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.966206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.966321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.966429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.966568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.966686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 
00:29:09.810 [2024-12-09 10:39:41.966802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.966913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.966940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.810 [2024-12-09 10:39:41.967053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.810 [2024-12-09 10:39:41.967079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.810 qpair failed and we were unable to recover it. 00:29:09.811 [2024-12-09 10:39:41.967164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-12-09 10:39:41.967191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 00:29:09.811 [2024-12-09 10:39:41.967272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:09.811 [2024-12-09 10:39:41.967299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420 00:29:09.811 qpair failed and we were unable to recover it. 
00:29:09.811 [2024-12-09 10:39:41.967392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.967420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.967513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.967540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.967623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.967650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.967741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.967770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.967859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.967885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.967970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.967997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.968892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.968919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.969922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.969949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.970940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.970969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5294000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.971062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.971184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 A controller has encountered a failure and is being reset.
00:29:09.811 [2024-12-09 10:39:41.971314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.971433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.971540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.971655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.811 [2024-12-09 10:39:41.971770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.811 [2024-12-09 10:39:41.971797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.811 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.971878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.971904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.971983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.972089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.972217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.972338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f1efa0 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.972460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f52a0000b90 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.972605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5298000b90 with addr=10.0.0.2, port=4420
00:29:09.812 qpair failed and we were unable to recover it.
00:29:09.812 [2024-12-09 10:39:41.972755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:09.812 [2024-12-09 10:39:41.972803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2cf30 with addr=10.0.0.2, port=4420
[2024-12-09 10:39:41.972825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2cf30 is same with the state(6) to be set
[2024-12-09 10:39:41.972856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2cf30 (9): Bad file descriptor
[2024-12-09 10:39:41.972875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
[2024-12-09 10:39:41.972897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
[2024-12-09 10:39:41.972914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:09.812 Unable to reset the controller.
00:29:09.812 Malloc0
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.812 [2024-12-09 10:39:41.992043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.812 10:39:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.812 [2024-12-09 10:39:42.020355] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:09.812 10:39:42 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2657247
00:29:10.745 Controller properly reset.
00:29:16.003 Initializing NVMe Controllers
00:29:16.003 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:16.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:16.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:16.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:16.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:16.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:16.003 Initialization complete. Launching workers.
00:29:16.003 Starting thread on core 1
00:29:16.003 Starting thread on core 2
00:29:16.003 Starting thread on core 3
00:29:16.003 Starting thread on core 0
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:16.003
00:29:16.003 real 0m10.752s
00:29:16.003 user 0m34.399s
00:29:16.003 sys 0m6.886s
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:16.003 ************************************
00:29:16.003 END TEST nvmf_target_disconnect_tc2
00:29:16.003 ************************************
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:16.003 10:39:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:16.003 rmmod nvme_tcp
00:29:16.003 rmmod nvme_fabrics
00:29:16.003 rmmod nvme_keyring
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2657777 ']'
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2657777
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2657777 ']'
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2657777
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2657777
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2657777'
killing process with pid 2657777
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2657777
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2657777
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:16.003 10:39:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:18.564 10:39:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:18.564
00:29:18.564 real 0m15.944s
00:29:18.564 user 1m0.418s
00:29:18.564 sys 0m9.563s
00:29:18.564 10:39:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:18.564 10:39:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:18.564 ************************************
00:29:18.564 END TEST nvmf_target_disconnect
00:29:18.564 ************************************
00:29:18.564 10:39:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:18.564
00:29:18.564 real 5m7.463s
00:29:18.564 user 11m8.399s
00:29:18.564 sys 1m16.821s
00:29:18.565 10:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:18.565 10:39:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:18.565 ************************************
00:29:18.565 END TEST nvmf_host
00:29:18.565 ************************************
00:29:18.565 10:39:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:29:18.565 10:39:50 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:29:18.565 10:39:50 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:18.565 10:39:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:29:18.565 10:39:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:18.565 10:39:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:18.565 ************************************
00:29:18.565 START TEST nvmf_target_core_interrupt_mode
00:29:18.565 ************************************
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
* Looking for test storage...
00:29:18.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:18.565 --rc genhtml_branch_coverage=1
00:29:18.565 --rc genhtml_function_coverage=1
00:29:18.565 --rc genhtml_legend=1
00:29:18.565 --rc geninfo_all_blocks=1
00:29:18.565 --rc geninfo_unexecuted_blocks=1
00:29:18.565
00:29:18.565 '
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:18.565 --rc genhtml_branch_coverage=1
00:29:18.565 --rc genhtml_function_coverage=1
00:29:18.565 --rc genhtml_legend=1
00:29:18.565 --rc geninfo_all_blocks=1
00:29:18.565 --rc geninfo_unexecuted_blocks=1
00:29:18.565
00:29:18.565 '
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:18.565 --rc genhtml_branch_coverage=1
00:29:18.565 --rc genhtml_function_coverage=1
00:29:18.565 --rc genhtml_legend=1
00:29:18.565 --rc geninfo_all_blocks=1
00:29:18.565 --rc geninfo_unexecuted_blocks=1
00:29:18.565
00:29:18.565 '
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:18.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:18.565 --rc genhtml_branch_coverage=1
00:29:18.565 --rc genhtml_function_coverage=1
00:29:18.565 --rc genhtml_legend=1
00:29:18.565 --rc geninfo_all_blocks=1
00:29:18.565 --rc geninfo_unexecuted_blocks=1
00:29:18.565
00:29:18.565 '
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:29:18.565 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:18.566 ************************************ 00:29:18.566 START TEST nvmf_abort 00:29:18.566 ************************************ 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:18.566 * Looking for test storage... 
00:29:18.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:18.566 10:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.566 --rc genhtml_branch_coverage=1 00:29:18.566 --rc genhtml_function_coverage=1 00:29:18.566 --rc genhtml_legend=1 00:29:18.566 --rc geninfo_all_blocks=1 00:29:18.566 --rc geninfo_unexecuted_blocks=1 00:29:18.566 00:29:18.566 ' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.566 --rc genhtml_branch_coverage=1 00:29:18.566 --rc genhtml_function_coverage=1 00:29:18.566 --rc genhtml_legend=1 00:29:18.566 --rc geninfo_all_blocks=1 00:29:18.566 --rc geninfo_unexecuted_blocks=1 00:29:18.566 00:29:18.566 ' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.566 --rc genhtml_branch_coverage=1 00:29:18.566 --rc genhtml_function_coverage=1 00:29:18.566 --rc genhtml_legend=1 00:29:18.566 --rc geninfo_all_blocks=1 00:29:18.566 --rc geninfo_unexecuted_blocks=1 00:29:18.566 00:29:18.566 ' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:18.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:18.566 --rc genhtml_branch_coverage=1 00:29:18.566 --rc genhtml_function_coverage=1 00:29:18.566 --rc genhtml_legend=1 00:29:18.566 --rc geninfo_all_blocks=1 00:29:18.566 --rc geninfo_unexecuted_blocks=1 00:29:18.566 00:29:18.566 ' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:18.566 10:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.566 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:18.567 10:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:18.567 10:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.107 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.108 10:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:21.108 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:21.108 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.108 
10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:21.108 Found net devices under 0000:09:00.0: cvl_0_0 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:21.108 Found net devices under 0000:09:00.1: cvl_0_1 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.108 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.109 10:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.109 10:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:21.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:29:21.109 00:29:21.109 --- 10.0.0.2 ping statistics --- 00:29:21.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.109 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
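The trace above builds the test topology: the target NIC is moved into a private network namespace, both ends get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420 on the initiator side, and connectivity is verified with ping in both directions. The sequence can be sketched as below; interface and namespace names are taken from this trace, and `run` only prints each command instead of executing it, since the real steps need root and the physical cvl NICs:

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup traced above.
# 'run' prints each command rather than executing it (a real run needs root + NICs).
run() { printf '+ %s\n' "$*"; }

NS=cvl_0_0_ns_spdk

run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target side lives inside the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tag lets cleanup strip the rule later.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                           # root ns -> netns
run ip netns exec "$NS" ping -c 1 10.0.0.1       # netns -> root ns
```

The comment tag on the ACCEPT rule matters: teardown later does `iptables-save | grep -v SPDK_NVMF | iptables-restore`, so only rules carrying that tag are removed.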
00:29:21.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:29:21.109 00:29:21.109 --- 10.0.0.1 ping statistics --- 00:29:21.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.109 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2660585 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2660585 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2660585 ']' 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:21.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:21.109 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.109 [2024-12-09 10:39:53.135306] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:21.109 [2024-12-09 10:39:53.136364] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
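At this point the target application is started inside the namespace and the harness waits for its RPC socket. A minimal sketch of that launch follows; the command line is reproduced from the trace, while the socket-polling loop is an illustrative stand-in for the harness's `waitforlisten` helper, and `run` again only prints:

```shell
#!/bin/sh
# Dry-run sketch of the interrupt-mode target launch traced above.
run() { printf '+ %s\n' "$*"; }

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Start nvmf_tgt in the namespace: shm id 0 (-i 0), all tracepoint groups
# (-e 0xFFFF), interrupt mode, reactors on cores 1-3 (-m 0xE).
run ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE

# Sketch of what waitforlisten does: poll until the RPC socket exists.
run 'until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done'
```

With `--interrupt-mode` the reactors sleep on events instead of busy-polling, which is why the trace shows each spdk_thread being switched "to intr mode" as it starts.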
00:29:21.109 [2024-12-09 10:39:53.136420] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:21.109 [2024-12-09 10:39:53.210031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:21.109 [2024-12-09 10:39:53.269271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:21.109 [2024-12-09 10:39:53.269330] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:21.109 [2024-12-09 10:39:53.269344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:21.109 [2024-12-09 10:39:53.269355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:21.109 [2024-12-09 10:39:53.269365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:21.110 [2024-12-09 10:39:53.271049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:21.110 [2024-12-09 10:39:53.271116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:21.110 [2024-12-09 10:39:53.271119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.110 [2024-12-09 10:39:53.370667] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:21.110 [2024-12-09 10:39:53.370850] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:21.110 [2024-12-09 10:39:53.370856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:21.110 [2024-12-09 10:39:53.371104] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 [2024-12-09 10:39:53.419876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:29:21.110 Malloc0 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 Delay0 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 [2024-12-09 10:39:53.492060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.110 10:39:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:29:21.369 [2024-12-09 10:39:53.603004] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:29:23.909 Initializing NVMe Controllers 00:29:23.909 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:23.909 controller IO queue size 128 less than required 00:29:23.909 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:29:23.909 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:29:23.909 Initialization complete. Launching workers. 
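The target is then configured over JSON-RPC before the abort example connects: a TCP transport, a malloc bdev wrapped in a delay bdev, and a subsystem exposing it on 10.0.0.2:4420. The call sequence (arguments reproduced from the trace) can be sketched as below; in a real run `RPC` would be SPDK's `scripts/rpc.py`, here it is a print stub:

```shell
#!/bin/sh
# Dry-run sketch of the RPC configuration traced above; RPC only prints.
RPC="echo + rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256     # TCP transport, options as traced
$RPC bdev_malloc_create 64 4096 -b Malloc0              # 64 MiB malloc bdev, 4 KiB blocks
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000        # large added latency on all I/O paths
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# The abort example then connects from the root namespace (as traced):
#   build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
#       -c 0x1 -t 1 -l warning -q 128
```

The delay bdev is the point of the test: with every I/O held up, submitted commands sit queued long enough for the abort example to race abort requests against them, which is what the completed/failed counters that follow are measuring.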
00:29:23.909 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29577 00:29:23.909 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29634, failed to submit 66 00:29:23.909 success 29577, unsuccessful 57, failed 0 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.909 rmmod nvme_tcp 00:29:23.909 rmmod nvme_fabrics 00:29:23.909 rmmod nvme_keyring 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.909 10:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2660585 ']' 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2660585 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2660585 ']' 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2660585 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2660585 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2660585' 00:29:23.909 killing process with pid 2660585 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2660585 00:29:23.909 10:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2660585 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.909 10:39:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.909 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.910 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.910 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.910 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.910 10:39:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.814 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:25.814 00:29:25.814 real 0m7.592s 00:29:25.814 user 0m9.910s 00:29:25.814 sys 0m2.959s 00:29:25.814 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.814 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:25.814 ************************************ 00:29:25.814 END TEST nvmf_abort 00:29:25.814 ************************************ 00:29:26.075 10:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:26.075 ************************************ 00:29:26.075 START TEST nvmf_ns_hotplug_stress 00:29:26.075 ************************************ 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:26.075 * Looking for test storage... 
00:29:26.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.075 10:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:26.075 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.076 10:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.076 --rc genhtml_branch_coverage=1 00:29:26.076 --rc genhtml_function_coverage=1 00:29:26.076 --rc genhtml_legend=1 00:29:26.076 --rc geninfo_all_blocks=1 00:29:26.076 --rc geninfo_unexecuted_blocks=1 00:29:26.076 00:29:26.076 ' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:26.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.076 --rc genhtml_branch_coverage=1 00:29:26.076 --rc genhtml_function_coverage=1 00:29:26.076 --rc genhtml_legend=1 00:29:26.076 --rc geninfo_all_blocks=1 00:29:26.076 --rc geninfo_unexecuted_blocks=1 00:29:26.076 00:29:26.076 ' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.076 --rc genhtml_branch_coverage=1 00:29:26.076 --rc genhtml_function_coverage=1 00:29:26.076 --rc genhtml_legend=1 00:29:26.076 --rc geninfo_all_blocks=1 00:29:26.076 --rc geninfo_unexecuted_blocks=1 00:29:26.076 00:29:26.076 ' 00:29:26.076 10:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.076 --rc genhtml_branch_coverage=1 00:29:26.076 --rc genhtml_function_coverage=1 00:29:26.076 --rc genhtml_legend=1 00:29:26.076 --rc geninfo_all_blocks=1 00:29:26.076 --rc geninfo_unexecuted_blocks=1 00:29:26.076 00:29:26.076 ' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.076 10:39:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.076 
10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.076 10:39:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:29:28.610 
10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:28.610 10:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:28.610 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.610 10:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:28.610 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.610 
10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:28.610 Found net devices under 0000:09:00.0: cvl_0_0 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:28.610 Found net devices under 0000:09:00.1: cvl_0_1 00:29:28.610 
10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:28.610 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:28.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:28.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:29:28.611 00:29:28.611 --- 10.0.0.2 ping statistics --- 00:29:28.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.611 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:28.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:28.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:29:28.611 00:29:28.611 --- 10.0.0.1 ping statistics --- 00:29:28.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:28.611 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:28.611 10:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2662812 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2662812 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2662812 ']' 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:28.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:28.611 [2024-12-09 10:40:00.680659] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:28.611 [2024-12-09 10:40:00.681774] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:29:28.611 [2024-12-09 10:40:00.681828] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:28.611 [2024-12-09 10:40:00.760651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:28.611 [2024-12-09 10:40:00.820042] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:28.611 [2024-12-09 10:40:00.820088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:28.611 [2024-12-09 10:40:00.820109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:28.611 [2024-12-09 10:40:00.820133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:28.611 [2024-12-09 10:40:00.820150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:28.611 [2024-12-09 10:40:00.821526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:28.611 [2024-12-09 10:40:00.821599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:28.611 [2024-12-09 10:40:00.821602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.611 [2024-12-09 10:40:00.907124] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:28.611 [2024-12-09 10:40:00.907363] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:28.611 [2024-12-09 10:40:00.907367] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:28.611 [2024-12-09 10:40:00.907630] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:29:28.611 10:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:28.868 [2024-12-09 10:40:01.210301] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.868 10:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:29.125 10:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:29.382 [2024-12-09 10:40:01.754693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:29.382 10:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:29.640 10:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:29:29.898 Malloc0 00:29:29.898 10:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:30.156 Delay0 00:29:30.414 10:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:30.672 10:40:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:29:30.930 NULL1 00:29:30.930 10:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:29:31.188 10:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2663222 00:29:31.188 10:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:31.188 10:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:31.188 10:40:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:29:32.125 Read completed with error (sct=0, sc=11) 00:29:32.384 10:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:29:32.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.384 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.643 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:32.643 10:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:29:32.643 10:40:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:29:32.911 true 00:29:32.911 10:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:32.911 10:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:33.476 10:40:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:33.476 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:33.734 10:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:29:33.734 10:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:29:33.992 true 00:29:33.992 10:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:33.992 10:40:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:34.558 10:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:34.558 10:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:29:34.558 10:40:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:29:34.816 true 00:29:34.816 10:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:34.816 10:40:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:35.747 10:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.004 10:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:29:36.004 10:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:29:36.261 true 00:29:36.261 10:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 
00:29:36.261 10:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:36.518 10:40:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.776 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:29:36.776 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:29:37.035 true 00:29:37.035 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:37.035 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:37.292 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:37.551 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:29:37.551 10:40:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:29:37.809 true 00:29:37.809 10:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 2663222 00:29:37.809 10:40:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:38.741 10:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:38.998 10:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:29:38.998 10:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:29:39.256 true 00:29:39.256 10:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:39.256 10:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:39.514 10:40:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:39.773 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:29:39.773 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:29:40.030 true 00:29:40.286 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:40.286 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:40.544 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:40.802 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:29:40.802 10:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:29:41.060 true 00:29:41.060 10:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:41.060 10:40:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:41.993 10:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:42.252 10:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:29:42.252 10:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:29:42.510 true 00:29:42.510 10:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:42.510 10:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:42.767 10:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.025 10:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:29:43.025 10:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:29:43.281 true 00:29:43.281 10:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:43.281 10:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:43.538 10:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:43.795 10:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:29:43.795 10:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:29:44.052 true 
00:29:44.052 10:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:44.052 10:40:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:44.984 10:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:45.242 10:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:45.242 10:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:45.499 true 00:29:45.499 10:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:45.499 10:40:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.072 10:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:46.072 10:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:46.072 10:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 
00:29:46.328 true 00:29:46.328 10:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:46.328 10:40:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:46.584 10:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:47.145 10:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:47.145 10:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:47.145 true 00:29:47.145 10:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:47.145 10:40:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.072 10:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:48.328 10:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:48.328 10:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1016 00:29:48.586 true 00:29:48.586 10:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:48.586 10:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:48.843 10:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.100 10:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:49.100 10:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:29:49.358 true 00:29:49.358 10:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:49.358 10:40:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:49.616 10:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:49.873 10:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:49.873 10:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:50.132 true 00:29:50.389 10:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:50.389 10:40:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:51.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.321 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:51.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:51.579 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:51.579 10:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:51.841 true 00:29:51.841 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:51.841 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:52.137 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:29:52.420 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:52.421 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:52.421 true 00:29:52.421 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:52.421 10:40:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:53.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.358 10:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:53.358 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:53.614 10:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:53.615 10:40:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:53.872 true 00:29:53.872 10:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:53.872 10:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:54.129 10:40:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:54.385 10:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:54.385 10:40:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:54.642 true 00:29:54.642 10:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:54.642 10:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:55.571 10:40:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:55.571 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:55.829 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:55.829 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:56.086 true 00:29:56.086 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:56.086 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:56.343 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:56.600 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:56.600 10:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:56.857 true 00:29:56.857 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:56.857 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:57.114 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:57.371 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:57.371 10:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:29:57.629 true 00:29:57.887 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:57.887 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:58.820 10:40:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:58.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:59.078 10:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:59.078 10:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:59.336 true 00:29:59.336 10:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:29:59.336 10:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:59.594 10:40:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:59.852 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:59.852 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:00.109 true 00:30:00.109 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 2663222 00:30:00.109 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:00.367 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:00.624 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:00.624 10:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:00.882 true 00:30:00.882 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:30:00.882 10:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:01.816 Initializing NVMe Controllers 00:30:01.816 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:01.816 Controller IO queue size 128, less than required. 00:30:01.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:01.816 Controller IO queue size 128, less than required. 00:30:01.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:30:01.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:01.816 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:01.816 Initialization complete. Launching workers. 00:30:01.816 ======================================================== 00:30:01.816 Latency(us) 00:30:01.816 Device Information : IOPS MiB/s Average min max 00:30:01.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 582.50 0.28 97204.72 3316.59 1016676.93 00:30:01.816 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8539.90 4.17 14989.96 2162.80 535306.62 00:30:01.816 ======================================================== 00:30:01.816 Total : 9122.40 4.45 20239.68 2162.80 1016676.93 00:30:01.816 00:30:01.816 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.074 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:02.074 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:02.332 true 00:30:02.332 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2663222 00:30:02.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2663222) - No such process 00:30:02.332 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2663222 00:30:02.332 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:02.589 10:40:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:02.851 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:02.851 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:02.851 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:02.851 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:02.851 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:03.110 null0 00:30:03.110 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.110 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.110 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:03.367 null1 00:30:03.367 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.367 10:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.367 10:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:03.624 null2 00:30:03.624 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.624 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.624 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:03.881 null3 00:30:03.881 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:03.881 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:03.881 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:04.445 null4 00:30:04.445 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.445 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.445 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:04.445 null5 00:30:04.445 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.445 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.446 10:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:04.702 null6 00:30:04.702 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.702 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.702 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:04.960 null7 00:30:04.960 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:04.960 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:04.960 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:04.960 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.219 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2667242 2667243 2667245 2667247 2667249 2667251 2667253 2667255 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.220 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:05.477 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.734 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:05.734 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:05.991 10:40:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.991 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:06.249 10:40:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.249 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:06.250 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.250 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:06.250 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.250 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.250 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:06.506 10:40:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.506 10:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:06.763 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:06.763 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:06.763 10:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.019 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.019 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.020 10:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.020 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.276 10:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.276 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:07.532 10:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.532 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.532 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:07.532 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.532 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.532 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:07.532 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:07.533 10:40:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:07.533 10:40:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:07.790 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:07.790 10:40:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:08.050 10:40:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.050 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:08.308 10:40:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:08.308 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:08.566 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.566 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.566 10:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:08.567 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.567 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.567 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
6 nqn.2016-06.io.spdk:cnode1 null5 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:08.825 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.083 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.341 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.342 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.600 10:40:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.858 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:09.859 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:10.117 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.375 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.375 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.376 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:10.634 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.634 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.634 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:10.634 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:10.634 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:10.634 10:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:10.892 10:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.892 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:11.150 10:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:11.150 rmmod nvme_tcp 00:30:11.150 rmmod nvme_fabrics 00:30:11.150 rmmod nvme_keyring 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2662812 ']' 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2662812 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2662812 ']' 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2662812 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2662812 00:30:11.150 10:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2662812' 00:30:11.150 killing process with pid 2662812 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2662812 00:30:11.150 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2662812 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:11.411 10:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.411 10:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:14.013 00:30:14.013 real 0m47.512s 00:30:14.013 user 3m18.752s 00:30:14.013 sys 0m22.138s 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:14.013 ************************************ 00:30:14.013 END TEST nvmf_ns_hotplug_stress 00:30:14.013 ************************************ 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:14.013 ************************************ 00:30:14.013 START TEST nvmf_delete_subsystem 00:30:14.013 ************************************ 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:14.013 * Looking for test storage... 00:30:14.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:14.013 
10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:14.013 10:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.013 --rc genhtml_branch_coverage=1 00:30:14.013 --rc genhtml_function_coverage=1 00:30:14.013 --rc genhtml_legend=1 00:30:14.013 --rc geninfo_all_blocks=1 00:30:14.013 --rc geninfo_unexecuted_blocks=1 00:30:14.013 00:30:14.013 ' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.013 --rc genhtml_branch_coverage=1 00:30:14.013 --rc genhtml_function_coverage=1 00:30:14.013 --rc genhtml_legend=1 00:30:14.013 --rc geninfo_all_blocks=1 00:30:14.013 --rc geninfo_unexecuted_blocks=1 00:30:14.013 00:30:14.013 ' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.013 --rc genhtml_branch_coverage=1 00:30:14.013 --rc genhtml_function_coverage=1 00:30:14.013 --rc genhtml_legend=1 00:30:14.013 --rc geninfo_all_blocks=1 00:30:14.013 --rc 
geninfo_unexecuted_blocks=1 00:30:14.013 00:30:14.013 ' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:14.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:14.013 --rc genhtml_branch_coverage=1 00:30:14.013 --rc genhtml_function_coverage=1 00:30:14.013 --rc genhtml_legend=1 00:30:14.013 --rc geninfo_all_blocks=1 00:30:14.013 --rc geninfo_unexecuted_blocks=1 00:30:14.013 00:30:14.013 ' 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:14.013 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.014 
10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:14.014 10:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:14.014 10:40:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:15.934 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.934 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:15.935 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.1 (0x8086 - 0x159b)' 00:30:15.935 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.935 10:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:15.935 Found net devices under 0000:09:00.0: cvl_0_0 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:15.935 Found net devices under 0000:09:00.1: cvl_0_1 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.935 10:40:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.935 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:30:15.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:30:15.936 00:30:15.936 --- 10.0.0.2 ping statistics --- 00:30:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.936 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:30:15.936 00:30:15.936 --- 10.0.0.1 ping statistics --- 00:30:15.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.936 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2670128 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2670128 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2670128 ']' 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.936 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:15.936 [2024-12-09 10:40:48.333050] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:15.936 [2024-12-09 10:40:48.334170] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:30:15.936 [2024-12-09 10:40:48.334231] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:16.194 [2024-12-09 10:40:48.409570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:16.194 [2024-12-09 10:40:48.467873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:16.194 [2024-12-09 10:40:48.467949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:16.194 [2024-12-09 10:40:48.467962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:16.194 [2024-12-09 10:40:48.467973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:16.194 [2024-12-09 10:40:48.467983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:16.194 [2024-12-09 10:40:48.469430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:16.194 [2024-12-09 10:40:48.469435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:16.194 [2024-12-09 10:40:48.566856] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:16.194 [2024-12-09 10:40:48.566891] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:16.194 [2024-12-09 10:40:48.567118] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.194 [2024-12-09 10:40:48.618075] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.194 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.194 [2024-12-09 10:40:48.634358] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.451 NULL1 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.451 Delay0 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2670148 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:16.451 10:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:16.451 [2024-12-09 10:40:48.715455] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:30:18.341 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:18.341 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.341 10:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, 
sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Write completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 starting I/O failed: -6 00:30:18.598 Read completed with error (sct=0, sc=8) 00:30:18.598 [2024-12-09 10:40:50.851680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc86800d680 is same with the state(6) to be set 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 
00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error 
(sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 
00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 starting I/O failed: -6 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 [2024-12-09 10:40:50.852456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a82c0 is same with the state(6) to be set 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 
00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:18.599 Write completed with error (sct=0, sc=8) 00:30:18.599 Read completed with error (sct=0, sc=8) 00:30:19.531 [2024-12-09 10:40:51.811068] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a99b0 is same with the state(6) to be set 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 
Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 [2024-12-09 10:40:51.852853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc868000c40 is same with the state(6) to be set 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 [2024-12-09 10:40:51.853080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x7fc86800d350 is same with the state(6) to be set 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 [2024-12-09 10:40:51.853524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a8860 is same with the state(6) to be set 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 
Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 Write completed with error (sct=0, sc=8) 00:30:19.531 Read completed with error (sct=0, sc=8) 00:30:19.531 [2024-12-09 10:40:51.853706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a84a0 is same with the state(6) to be set 00:30:19.531 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.531 Initializing NVMe Controllers 00:30:19.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:19.531 Controller IO queue size 128, less than required. 00:30:19.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:19.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:19.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:19.531 Initialization complete. Launching workers. 
00:30:19.531 ======================================================== 00:30:19.531 Latency(us) 00:30:19.531 Device Information : IOPS MiB/s Average min max 00:30:19.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.74 0.08 906519.67 416.98 1012970.40 00:30:19.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.29 0.08 924460.68 690.37 1011586.85 00:30:19.531 ======================================================== 00:30:19.531 Total : 323.04 0.16 915311.04 416.98 1012970.40 00:30:19.531 00:30:19.531 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:19.531 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2670148 00:30:19.531 [2024-12-09 10:40:51.854742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14a99b0 (9): Bad file descriptor 00:30:19.531 10:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:19.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2670148 00:30:20.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2670148) - No such process 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2670148 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:30:20.099 10:40:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2670148 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2670148 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:20.099 [2024-12-09 10:40:52.374313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2670555 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:20.099 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:20.099 [2024-12-09 10:40:52.437031] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:20.665 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:20.665 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:20.666 10:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:21.230 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:21.230 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:21.230 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:21.488 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:21.488 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:21.488 10:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:22.054 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:22.055 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 
-- # kill -0 2670555 00:30:22.055 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:22.621 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:22.621 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:22.621 10:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:23.184 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:23.184 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:23.184 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:23.184 Initializing NVMe Controllers 00:30:23.184 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.184 Controller IO queue size 128, less than required. 00:30:23.184 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:23.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:23.184 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:23.184 Initialization complete. Launching workers. 
00:30:23.184 ======================================================== 00:30:23.184 Latency(us) 00:30:23.184 Device Information : IOPS MiB/s Average min max 00:30:23.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003837.46 1000236.79 1041757.07 00:30:23.184 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005836.36 1000209.57 1041596.02 00:30:23.184 ======================================================== 00:30:23.184 Total : 256.00 0.12 1004836.91 1000209.57 1041757.07 00:30:23.184 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2670555 00:30:23.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2670555) - No such process 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2670555 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:23.749 rmmod nvme_tcp 00:30:23.749 rmmod nvme_fabrics 00:30:23.749 rmmod nvme_keyring 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2670128 ']' 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2670128 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2670128 ']' 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2670128 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2670128 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:23.749 10:40:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2670128' 00:30:23.749 killing process with pid 2670128 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2670128 00:30:23.749 10:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2670128 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.009 10:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.009 10:40:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:25.911 00:30:25.911 real 0m12.468s 00:30:25.911 user 0m24.855s 00:30:25.911 sys 0m3.753s 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:25.911 ************************************ 00:30:25.911 END TEST nvmf_delete_subsystem 00:30:25.911 ************************************ 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:25.911 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:26.168 ************************************ 00:30:26.168 START TEST nvmf_host_management 00:30:26.168 ************************************ 00:30:26.168 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:26.168 * Looking for test storage... 
00:30:26.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:26.168 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:26.168 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:30:26.168 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:26.168 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:26.169 10:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:26.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.169 --rc genhtml_branch_coverage=1 00:30:26.169 --rc genhtml_function_coverage=1 00:30:26.169 --rc genhtml_legend=1 00:30:26.169 --rc geninfo_all_blocks=1 00:30:26.169 --rc geninfo_unexecuted_blocks=1 00:30:26.169 00:30:26.169 ' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:26.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.169 --rc genhtml_branch_coverage=1 00:30:26.169 --rc genhtml_function_coverage=1 00:30:26.169 --rc genhtml_legend=1 00:30:26.169 --rc geninfo_all_blocks=1 00:30:26.169 --rc geninfo_unexecuted_blocks=1 00:30:26.169 00:30:26.169 ' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:26.169 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.169 --rc genhtml_branch_coverage=1 00:30:26.169 --rc genhtml_function_coverage=1 00:30:26.169 --rc genhtml_legend=1 00:30:26.169 --rc geninfo_all_blocks=1 00:30:26.169 --rc geninfo_unexecuted_blocks=1 00:30:26.169 00:30:26.169 ' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:26.169 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:26.169 --rc genhtml_branch_coverage=1 00:30:26.169 --rc genhtml_function_coverage=1 00:30:26.169 --rc genhtml_legend=1 00:30:26.169 --rc geninfo_all_blocks=1 00:30:26.169 --rc geninfo_unexecuted_blocks=1 00:30:26.169 00:30:26.169 ' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:26.169 10:40:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.169 
10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:26.169 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:26.170 10:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:30:28.740 
10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:28.740 10:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:28.740 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.740 10:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:28.740 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.740 10:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:28.740 Found net devices under 0000:09:00.0: cvl_0_0 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:28.740 Found net devices under 0000:09:00.1: cvl_0_1 00:30:28.740 10:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:28.740 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:28.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:28.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:30:28.741 00:30:28.741 --- 10.0.0.2 ping statistics --- 00:30:28.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.741 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:28.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:28.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:30:28.741 00:30:28.741 --- 10.0.0.1 ping statistics --- 00:30:28.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:28.741 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
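The interface setup traced above (flush addresses, create the namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open TCP port 4420, verify with ping) can be collected into one sketch. Interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, addresses, and the port are taken from the log; the `DRY_RUN` switch and the function name are additions for illustration, since actually executing these commands requires root and the physical NICs from this test bed.

```shell
#!/bin/sh
# Sketch of the NVMe-oF TCP interface setup traced in the log above.
# With DRY_RUN=1 the commands are printed instead of executed.
setup_nvmf_tcp_ifaces() {
    run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$@"; else "$@"; fi; }
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    run ip netns add cvl_0_0_ns_spdk
    # target-side NIC moves into the namespace; initiator NIC stays in the root ns
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    run ip addr add 10.0.0.1/24 dev cvl_0_1
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # allow inbound NVMe/TCP traffic on the default discovery/IO port
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # connectivity check, mirroring the ping in the trace
    run ping -c 1 10.0.0.2
}
```

Usage: `DRY_RUN=1 setup_nvmf_tcp_ifaces` prints the command sequence without touching the system.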
00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2673014 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2673014 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2673014 ']' 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.741 10:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.741 [2024-12-09 10:41:00.881829] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:28.741 [2024-12-09 10:41:00.882871] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:30:28.741 [2024-12-09 10:41:00.882938] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:28.741 [2024-12-09 10:41:00.953728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:28.741 [2024-12-09 10:41:01.012695] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:28.741 [2024-12-09 10:41:01.012762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:28.741 [2024-12-09 10:41:01.012785] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:28.741 [2024-12-09 10:41:01.012796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:28.741 [2024-12-09 10:41:01.012805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:28.741 [2024-12-09 10:41:01.014399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:28.741 [2024-12-09 10:41:01.014453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.741 [2024-12-09 10:41:01.014570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:28.741 [2024-12-09 10:41:01.014572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.741 [2024-12-09 10:41:01.104159] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:28.741 [2024-12-09 10:41:01.104370] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:28.741 [2024-12-09 10:41:01.104677] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:28.741 [2024-12-09 10:41:01.105351] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:28.741 [2024-12-09 10:41:01.105588] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
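The startup sequence above launches `nvmf_tgt` in the namespace and then blocks in `waitforlisten` (with `max_retries=100`, per the trace) until the RPC socket `/var/tmp/spdk.sock` appears. A minimal poll-until-ready helper in that spirit might look like the following; the function name and poll interval are assumptions, only the retry-bounded wait pattern comes from the trace.

```shell
#!/bin/sh
# Loosely modeled on waitforlisten in the trace: poll until a path
# (e.g. the SPDK RPC unix socket) exists, giving up after max_retries.
wait_for_path() {
    path=$1
    max_retries=${2:-100}   # trace shows local max_retries=100
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

In the real script the check is stronger (it verifies the process is alive and the socket accepts RPCs); this sketch keeps only the bounded-retry shape.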
00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.741 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.741 [2024-12-09 10:41:01.163244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.998 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.998 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:30:28.998 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.999 10:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.999 Malloc0 00:30:28.999 [2024-12-09 10:41:01.239452] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2673056 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2673056 /var/tmp/bdevperf.sock 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2673056 ']' 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:28.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:28.999 { 00:30:28.999 "params": { 00:30:28.999 "name": "Nvme$subsystem", 00:30:28.999 "trtype": "$TEST_TRANSPORT", 00:30:28.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:28.999 "adrfam": "ipv4", 00:30:28.999 "trsvcid": "$NVMF_PORT", 00:30:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:28.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:28.999 "hdgst": ${hdgst:-false}, 00:30:28.999 "ddgst": ${ddgst:-false} 00:30:28.999 }, 00:30:28.999 "method": "bdev_nvme_attach_controller" 00:30:28.999 } 00:30:28.999 EOF 00:30:28.999 )") 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:28.999 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:28.999 "params": { 00:30:28.999 "name": "Nvme0", 00:30:28.999 "trtype": "tcp", 00:30:28.999 "traddr": "10.0.0.2", 00:30:28.999 "adrfam": "ipv4", 00:30:28.999 "trsvcid": "4420", 00:30:28.999 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:28.999 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:28.999 "hdgst": false, 00:30:28.999 "ddgst": false 00:30:28.999 }, 00:30:28.999 "method": "bdev_nvme_attach_controller" 00:30:28.999 }' 00:30:28.999 [2024-12-09 10:41:01.325994] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:30:28.999 [2024-12-09 10:41:01.326084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673056 ] 00:30:28.999 [2024-12-09 10:41:01.394959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.256 [2024-12-09 10:41:01.454518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.513 Running I/O for 10 seconds... 
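The `gen_nvmf_target_json 0` call traced above expands a heredoc template into the controller-attach JSON that is fed to bdevperf via `--json /dev/fd/63`. A standalone sketch of that expansion, with field values mirroring the `printf` output shown in the log (the function name here is illustrative, and the real helper also handles multiple subsystems and joins them with commas):

```shell
#!/bin/sh
# Emit the bdev_nvme_attach_controller JSON seen in the trace for one subsystem.
gen_nvme_json() {
    subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Passing a different subsystem index changes the bdev name and the cnode/host NQNs together, which is how the test keeps initiator and target identities paired.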
00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:30:29.513 10:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:30:29.513 10:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.774 [2024-12-09 10:41:02.151274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151503] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the 
state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151651] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 
10:41:02.151699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151711] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151724] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151748] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 [2024-12-09 10:41:02.151762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb0ec80 is same with the state(6) to be set 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.774 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:29.774 [2024-12-09 10:41:02.158630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.774 [2024-12-09 10:41:02.158690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.774 [2024-12-09 10:41:02.158719] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.774 [2024-12-09 10:41:02.158735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.774 [2024-12-09 10:41:02.158752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.774 [2024-12-09 10:41:02.158765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.774 [2024-12-09 10:41:02.158781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.774 [2024-12-09 10:41:02.158793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.774 [2024-12-09 10:41:02.158808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.158836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.158870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.158900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.158928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.158956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.158984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.158997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159239] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159403] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 
[2024-12-09 10:41:02.159748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.775 [2024-12-09 10:41:02.159917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.775 [2024-12-09 10:41:02.159930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.159944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.159957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.159972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.159989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 
10:41:02.160460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:29.776 [2024-12-09 10:41:02.160629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.776 [2024-12-09 10:41:02.160812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.776 [2024-12-09 10:41:02.160835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.776 [2024-12-09 10:41:02.160865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.776 [2024-12-09 10:41:02.160892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:29.776 [2024-12-09 10:41:02.160919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:29.776 [2024-12-09 10:41:02.160938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x5c2660 is same with the state(6) to be set 00:30:29.776 [2024-12-09 10:41:02.162049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:29.776 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.776 10:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:30:29.776 task offset: 81920 on job bdev=Nvme0n1 fails 00:30:29.776 00:30:29.776 Latency(us) 00:30:29.776 [2024-12-09T09:41:02.217Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:29.776 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:29.776 Job: Nvme0n1 ended in about 0.40 seconds with error 00:30:29.776 Verification LBA range: start 0x0 length 0x400 00:30:29.776 Nvme0n1 : 0.40 1580.75 98.80 158.07 0.00 35750.78 2949.12 34369.99 00:30:29.776 [2024-12-09T09:41:02.217Z] =================================================================================================================== 00:30:29.776 [2024-12-09T09:41:02.217Z] Total : 1580.75 98.80 158.07 0.00 35750.78 2949.12 34369.99 00:30:29.776 [2024-12-09 10:41:02.163939] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:30:29.776 [2024-12-09 10:41:02.163966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5c2660 (9): Bad file descriptor 00:30:29.776 [2024-12-09 10:41:02.209506] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
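The `bdev_get_iostat` polling visible in the xtrace above (read_io_count=67, then 579) comes from the `waitforio` helper in host_management.sh, a bounded retry loop over lines 52-64 of the traced script. Below is a minimal runnable sketch of that control flow; `query_read_ops` is a hypothetical stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'` pipeline, everything else mirrors the loop structure shown in the trace.

```shell
#!/usr/bin/env bash
# Hedged sketch of the waitforio polling loop traced above.
# query_read_ops is a hypothetical stub standing in for the rpc_cmd | jq
# pipeline; it reads FAKE_READ_OPS so the sketch is self-contained.
query_read_ops() { echo "${FAKE_READ_OPS:-0}"; }

waitforio_sketch() {
    local ret=1
    local i read_io_count
    # Poll up to 10 times, 0.25 s apart, until at least 100 reads have
    # completed (the trace shows 67 ops on the first pass, 579 on the second).
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(query_read_ops)
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

The bounded counter matters for the fault-injection step that follows in the trace: only once I/O is confirmed flowing does the script call `nvmf_subsystem_remove_host`, which produces the ABORTED/SQ DELETION completions logged above.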
00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2673056 00:30:31.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2673056) - No such process 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:30:31.146 { 00:30:31.146 "params": { 00:30:31.146 "name": "Nvme$subsystem", 00:30:31.146 "trtype": "$TEST_TRANSPORT", 00:30:31.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:31.146 "adrfam": "ipv4", 00:30:31.146 "trsvcid": "$NVMF_PORT", 00:30:31.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:31.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:31.146 "hdgst": ${hdgst:-false}, 00:30:31.146 "ddgst": ${ddgst:-false} 
00:30:31.146 }, 00:30:31.146 "method": "bdev_nvme_attach_controller" 00:30:31.146 } 00:30:31.146 EOF 00:30:31.146 )") 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:30:31.146 10:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:30:31.146 "params": { 00:30:31.146 "name": "Nvme0", 00:30:31.146 "trtype": "tcp", 00:30:31.146 "traddr": "10.0.0.2", 00:30:31.146 "adrfam": "ipv4", 00:30:31.146 "trsvcid": "4420", 00:30:31.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:31.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:31.146 "hdgst": false, 00:30:31.146 "ddgst": false 00:30:31.146 }, 00:30:31.146 "method": "bdev_nvme_attach_controller" 00:30:31.146 }' 00:30:31.146 [2024-12-09 10:41:03.210850] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:30:31.146 [2024-12-09 10:41:03.210941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673333 ] 00:30:31.146 [2024-12-09 10:41:03.280044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.146 [2024-12-09 10:41:03.339557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.146 Running I/O for 1 seconds... 
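The JSON printed above is what `gen_nvmf_target_json` feeds to bdevperf via `--json /dev/fd/62`: one `bdev_nvme_attach_controller` stanza per subsystem id, with the template variables (`$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT`) already substituted. A simplified standalone sketch of that generator (not the SPDK original, which builds the stanzas with a heredoc and joins them with `jq`; the transport values here are hard-coded to match the log):

```shell
#!/usr/bin/env bash
# Emit one bdev_nvme_attach_controller stanza per subsystem id, in the
# shape shown in the log above. Hypothetical simplified helper; the real
# gen_nvmf_target_json lives in nvmf/common.sh and substitutes
# environment variables instead of fixed values.
gen_config() {
  local s
  for s in "$@"; do
    printf '{"params": {"name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false}, "method": "bdev_nvme_attach_controller"}\n' \
      "$s" "$s" "$s"
  done
}

gen_config 0
```

Passing the result over `/dev/fd/62` (process substitution) lets bdevperf read a per-run config without a temporary file on disk.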
00:30:32.518 1664.00 IOPS, 104.00 MiB/s 00:30:32.519 Latency(us) 00:30:32.519 [2024-12-09T09:41:04.960Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.519 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:30:32.519 Verification LBA range: start 0x0 length 0x400 00:30:32.519 Nvme0n1 : 1.01 1703.68 106.48 0.00 0.00 36953.60 6844.87 33593.27 00:30:32.519 [2024-12-09T09:41:04.960Z] =================================================================================================================== 00:30:32.519 [2024-12-09T09:41:04.960Z] Total : 1703.68 106.48 0.00 0.00 36953.60 6844.87 33593.27 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:30:32.519 
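The `set +e` here precedes nvmftestfini's module unload, which retries `modprobe -v -r nvme-tcp` up to 20 times because the module can stay busy briefly after the last connection drops. A generic sketch of that retry idiom (hypothetical `retry` helper; the real loop in nvmf/common.sh inlines it around the modprobe calls):

```shell
#!/usr/bin/env bash
# Retry a command up to 20 times, as nvmftestfini does around
# `modprobe -r nvme-tcp`. Returns 0 on the first success, 1 if all
# attempts fail. Hypothetical helper for illustration.
retry() {
  local i
  for i in {1..20}; do
    "$@" && return 0
    sleep 0.1   # brief pause before the next attempt
  done
  return 1
}

retry true && echo "succeeded on first attempt"
```

Wrapping the loop in `set +e` / `set -e`, as the log shows, keeps an individual failed attempt from aborting the whole test script before the retries are exhausted.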
10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:32.519 rmmod nvme_tcp 00:30:32.519 rmmod nvme_fabrics 00:30:32.519 rmmod nvme_keyring 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2673014 ']' 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2673014 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2673014 ']' 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2673014 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2673014 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:32.519 10:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2673014' 00:30:32.519 killing process with pid 2673014 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2673014 00:30:32.519 10:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2673014 00:30:32.779 [2024-12-09 10:41:05.171213] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:32.779 10:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:30:35.313 00:30:35.313 real 0m8.890s 00:30:35.313 user 0m17.749s 00:30:35.313 sys 0m3.730s 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:30:35.313 ************************************ 00:30:35.313 END TEST nvmf_host_management 00:30:35.313 ************************************ 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:35.313 ************************************ 00:30:35.313 START TEST nvmf_lvol 00:30:35.313 ************************************ 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:30:35.313 * Looking for test storage... 
00:30:35.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:30:35.313 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:35.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.314 --rc genhtml_branch_coverage=1 00:30:35.314 --rc genhtml_function_coverage=1 00:30:35.314 --rc genhtml_legend=1 00:30:35.314 --rc geninfo_all_blocks=1 00:30:35.314 --rc geninfo_unexecuted_blocks=1 00:30:35.314 00:30:35.314 ' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:35.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.314 --rc genhtml_branch_coverage=1 00:30:35.314 --rc genhtml_function_coverage=1 00:30:35.314 --rc genhtml_legend=1 00:30:35.314 --rc geninfo_all_blocks=1 00:30:35.314 --rc geninfo_unexecuted_blocks=1 00:30:35.314 00:30:35.314 ' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:35.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.314 --rc genhtml_branch_coverage=1 00:30:35.314 --rc genhtml_function_coverage=1 00:30:35.314 --rc genhtml_legend=1 00:30:35.314 --rc geninfo_all_blocks=1 00:30:35.314 --rc geninfo_unexecuted_blocks=1 00:30:35.314 00:30:35.314 ' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:35.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:35.314 --rc genhtml_branch_coverage=1 00:30:35.314 --rc genhtml_function_coverage=1 00:30:35.314 --rc genhtml_legend=1 00:30:35.314 --rc geninfo_all_blocks=1 00:30:35.314 --rc geninfo_unexecuted_blocks=1 00:30:35.314 00:30:35.314 ' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.314 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:35.315 
10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:30:35.315 10:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:37.212 10:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:37.212 10:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:37.212 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:37.212 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:37.212 10:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:37.212 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:37.213 Found net devices under 0000:09:00.0: cvl_0_0 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.213 10:41:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:37.213 Found net devices under 0000:09:00.1: cvl_0_1 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.213 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:37.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:30:37.471 00:30:37.471 --- 10.0.0.2 ping statistics --- 00:30:37.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.471 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:37.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:30:37.471 00:30:37.471 --- 10.0.0.1 ping statistics --- 00:30:37.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.471 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2675453 
00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2675453 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2675453 ']' 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.471 10:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.471 [2024-12-09 10:41:09.838442] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:37.471 [2024-12-09 10:41:09.839508] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:30:37.471 [2024-12-09 10:41:09.839564] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.471 [2024-12-09 10:41:09.911008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:37.729 [2024-12-09 10:41:09.965712] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.729 [2024-12-09 10:41:09.965767] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.729 [2024-12-09 10:41:09.965796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.729 [2024-12-09 10:41:09.965808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.729 [2024-12-09 10:41:09.965817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.729 [2024-12-09 10:41:09.967402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.729 [2024-12-09 10:41:09.967529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.729 [2024-12-09 10:41:09.967532] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.729 [2024-12-09 10:41:10.064270] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:37.729 [2024-12-09 10:41:10.064486] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:37.729 [2024-12-09 10:41:10.064526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:30:37.729 [2024-12-09 10:41:10.064766] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.729 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:37.986 [2024-12-09 10:41:10.376257] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.986 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:38.551 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:38.551 10:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:38.809 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:38.809 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:39.067 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:39.325 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=aa96e5ed-7246-478d-8de4-ac488b551daa 00:30:39.325 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u aa96e5ed-7246-478d-8de4-ac488b551daa lvol 20 00:30:39.582 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=da7a9ee1-1012-4aff-814f-be370887824a 00:30:39.582 10:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:39.840 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 da7a9ee1-1012-4aff-814f-be370887824a 00:30:40.098 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.356 [2024-12-09 10:41:12.632405] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.356 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.614 
10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2675844 00:30:40.614 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:40.614 10:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:41.549 10:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot da7a9ee1-1012-4aff-814f-be370887824a MY_SNAPSHOT 00:30:42.114 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=fd80b4b8-b9ad-4798-a198-3ef06607798f 00:30:42.114 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize da7a9ee1-1012-4aff-814f-be370887824a 30 00:30:42.372 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone fd80b4b8-b9ad-4798-a198-3ef06607798f MY_CLONE 00:30:42.629 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4ca77f6f-3a7e-4fa1-811c-140e50b15a7d 00:30:42.629 10:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4ca77f6f-3a7e-4fa1-811c-140e50b15a7d 00:30:43.195 10:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2675844 00:30:51.360 Initializing NVMe Controllers 00:30:51.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:51.360 
Controller IO queue size 128, less than required. 00:30:51.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:51.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:51.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:51.360 Initialization complete. Launching workers. 00:30:51.360 ======================================================== 00:30:51.360 Latency(us) 00:30:51.360 Device Information : IOPS MiB/s Average min max 00:30:51.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10570.90 41.29 12112.90 5760.76 58892.87 00:30:51.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10305.10 40.25 12424.92 4381.58 52569.47 00:30:51.360 ======================================================== 00:30:51.360 Total : 20876.00 81.55 12266.92 4381.58 58892.87 00:30:51.360 00:30:51.360 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:51.360 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete da7a9ee1-1012-4aff-814f-be370887824a 00:30:51.618 10:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa96e5ed-7246-478d-8de4-ac488b551daa 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:51.875 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:51.876 rmmod nvme_tcp 00:30:51.876 rmmod nvme_fabrics 00:30:51.876 rmmod nvme_keyring 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2675453 ']' 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2675453 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2675453 ']' 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2675453 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2675453 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2675453' 00:30:51.876 killing process with pid 2675453 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2675453 00:30:51.876 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2675453 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:52.135 10:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:52.135 10:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:54.667 00:30:54.667 real 0m19.242s 00:30:54.667 user 0m56.389s 00:30:54.667 sys 0m7.774s 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:54.667 ************************************ 00:30:54.667 END TEST nvmf_lvol 00:30:54.667 ************************************ 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.667 ************************************ 00:30:54.667 START TEST nvmf_lvs_grow 00:30:54.667 ************************************ 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:54.667 * Looking for test storage... 
00:30:54.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.667 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.668 10:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.668 10:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.668 --rc genhtml_branch_coverage=1 00:30:54.668 --rc genhtml_function_coverage=1 00:30:54.668 --rc genhtml_legend=1 00:30:54.668 --rc geninfo_all_blocks=1 00:30:54.668 --rc geninfo_unexecuted_blocks=1 00:30:54.668 00:30:54.668 ' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.668 --rc genhtml_branch_coverage=1 00:30:54.668 --rc genhtml_function_coverage=1 00:30:54.668 --rc genhtml_legend=1 00:30:54.668 --rc geninfo_all_blocks=1 00:30:54.668 --rc geninfo_unexecuted_blocks=1 00:30:54.668 00:30:54.668 ' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.668 --rc genhtml_branch_coverage=1 00:30:54.668 --rc genhtml_function_coverage=1 00:30:54.668 --rc genhtml_legend=1 00:30:54.668 --rc geninfo_all_blocks=1 00:30:54.668 --rc geninfo_unexecuted_blocks=1 00:30:54.668 00:30:54.668 ' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:54.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.668 --rc genhtml_branch_coverage=1 00:30:54.668 --rc genhtml_function_coverage=1 00:30:54.668 --rc genhtml_legend=1 00:30:54.668 --rc geninfo_all_blocks=1 00:30:54.668 --rc 
geninfo_unexecuted_blocks=1 00:30:54.668 00:30:54.668 ' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:54.668 10:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.668 10:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:54.668 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:54.669 10:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:54.669 10:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.569 
10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.569 10:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.569 10:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:56.569 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:56.569 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:56.569 Found net devices under 0000:09:00.0: cvl_0_0 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.569 10:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:56.569 Found net devices under 0000:09:00.1: cvl_0_1 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.569 
10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.569 10:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.828 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.828 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:30:56.828 00:30:56.828 --- 10.0.0.2 ping statistics --- 00:30:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.828 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.828 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:30:56.828 00:30:56.828 --- 10.0.0.1 ping statistics --- 00:30:56.828 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.828 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.828 10:41:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2679220 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2679220 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2679220 ']' 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.828 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:56.828 [2024-12-09 10:41:29.206278] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.828 [2024-12-09 10:41:29.207556] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:30:56.828 [2024-12-09 10:41:29.207617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:57.086 [2024-12-09 10:41:29.286891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.086 [2024-12-09 10:41:29.346437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.086 [2024-12-09 10:41:29.346504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.086 [2024-12-09 10:41:29.346542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:57.086 [2024-12-09 10:41:29.346553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:57.086 [2024-12-09 10:41:29.346563] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.086 [2024-12-09 10:41:29.347173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.086 [2024-12-09 10:41:29.447069] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:57.086 [2024-12-09 10:41:29.447400] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.086 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:57.653 [2024-12-09 10:41:29.795829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:57.654 ************************************ 00:30:57.654 START TEST lvs_grow_clean 00:30:57.654 ************************************ 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:57.654 10:41:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:57.654 10:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:57.912 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:57.912 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:58.170 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86584f36-4e59-4698-95d7-b56f3a9be1a6 00:30:58.170 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:30:58.170 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:58.428 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:58.428 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:58.428 10:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 lvol 150 00:30:58.685 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=76405865-cb45-443e-80fe-ca63d7bc213e 00:30:58.685 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:58.686 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:58.943 [2024-12-09 10:41:31.331727] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:58.943 [2024-12-09 10:41:31.331832] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:58.943 true 00:30:58.943 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:30:58.943 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:59.201 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:59.201 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:59.767 10:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 76405865-cb45-443e-80fe-ca63d7bc213e 00:30:59.767 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.026 [2024-12-09 10:41:32.432020] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.026 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2679673 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2679673 /var/tmp/bdevperf.sock 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2679673 ']' 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:00.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.629 10:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:00.629 [2024-12-09 10:41:32.780097] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:31:00.629 [2024-12-09 10:41:32.780213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679673 ] 00:31:00.629 [2024-12-09 10:41:32.857645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.629 [2024-12-09 10:41:32.917879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.629 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.629 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:31:00.629 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:01.194 Nvme0n1 00:31:01.194 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:01.452 [ 00:31:01.452 { 00:31:01.452 "name": "Nvme0n1", 00:31:01.452 "aliases": [ 00:31:01.452 "76405865-cb45-443e-80fe-ca63d7bc213e" 00:31:01.452 ], 00:31:01.452 "product_name": "NVMe disk", 00:31:01.452 
"block_size": 4096, 00:31:01.452 "num_blocks": 38912, 00:31:01.452 "uuid": "76405865-cb45-443e-80fe-ca63d7bc213e", 00:31:01.452 "numa_id": 0, 00:31:01.452 "assigned_rate_limits": { 00:31:01.452 "rw_ios_per_sec": 0, 00:31:01.452 "rw_mbytes_per_sec": 0, 00:31:01.452 "r_mbytes_per_sec": 0, 00:31:01.452 "w_mbytes_per_sec": 0 00:31:01.452 }, 00:31:01.452 "claimed": false, 00:31:01.452 "zoned": false, 00:31:01.452 "supported_io_types": { 00:31:01.452 "read": true, 00:31:01.452 "write": true, 00:31:01.452 "unmap": true, 00:31:01.452 "flush": true, 00:31:01.452 "reset": true, 00:31:01.452 "nvme_admin": true, 00:31:01.452 "nvme_io": true, 00:31:01.452 "nvme_io_md": false, 00:31:01.452 "write_zeroes": true, 00:31:01.452 "zcopy": false, 00:31:01.452 "get_zone_info": false, 00:31:01.452 "zone_management": false, 00:31:01.452 "zone_append": false, 00:31:01.452 "compare": true, 00:31:01.452 "compare_and_write": true, 00:31:01.452 "abort": true, 00:31:01.452 "seek_hole": false, 00:31:01.452 "seek_data": false, 00:31:01.452 "copy": true, 00:31:01.452 "nvme_iov_md": false 00:31:01.452 }, 00:31:01.452 "memory_domains": [ 00:31:01.452 { 00:31:01.452 "dma_device_id": "system", 00:31:01.452 "dma_device_type": 1 00:31:01.452 } 00:31:01.452 ], 00:31:01.452 "driver_specific": { 00:31:01.452 "nvme": [ 00:31:01.452 { 00:31:01.452 "trid": { 00:31:01.452 "trtype": "TCP", 00:31:01.452 "adrfam": "IPv4", 00:31:01.452 "traddr": "10.0.0.2", 00:31:01.452 "trsvcid": "4420", 00:31:01.452 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:01.452 }, 00:31:01.452 "ctrlr_data": { 00:31:01.452 "cntlid": 1, 00:31:01.452 "vendor_id": "0x8086", 00:31:01.452 "model_number": "SPDK bdev Controller", 00:31:01.452 "serial_number": "SPDK0", 00:31:01.452 "firmware_revision": "25.01", 00:31:01.452 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.452 "oacs": { 00:31:01.452 "security": 0, 00:31:01.452 "format": 0, 00:31:01.452 "firmware": 0, 00:31:01.452 "ns_manage": 0 00:31:01.452 }, 00:31:01.452 "multi_ctrlr": true, 
00:31:01.452 "ana_reporting": false 00:31:01.452 }, 00:31:01.452 "vs": { 00:31:01.452 "nvme_version": "1.3" 00:31:01.452 }, 00:31:01.452 "ns_data": { 00:31:01.452 "id": 1, 00:31:01.452 "can_share": true 00:31:01.452 } 00:31:01.452 } 00:31:01.452 ], 00:31:01.452 "mp_policy": "active_passive" 00:31:01.452 } 00:31:01.452 } 00:31:01.452 ] 00:31:01.452 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2679809 00:31:01.452 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.452 10:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:01.453 Running I/O for 10 seconds... 00:31:02.829 Latency(us) 00:31:02.829 [2024-12-09T09:41:35.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:02.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:02.829 Nvme0n1 : 1.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:31:02.829 [2024-12-09T09:41:35.270Z] =================================================================================================================== 00:31:02.829 [2024-12-09T09:41:35.270Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:31:02.829 00:31:03.395 10:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:03.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:03.653 Nvme0n1 : 2.00 15303.50 59.78 0.00 0.00 0.00 0.00 0.00 00:31:03.653 [2024-12-09T09:41:36.094Z] 
=================================================================================================================== 00:31:03.653 [2024-12-09T09:41:36.094Z] Total : 15303.50 59.78 0.00 0.00 0.00 0.00 0.00 00:31:03.653 00:31:03.653 true 00:31:03.653 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:03.653 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:03.911 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:03.911 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:03.911 10:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2679809 00:31:04.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:04.477 Nvme0n1 : 3.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:31:04.477 [2024-12-09T09:41:36.918Z] =================================================================================================================== 00:31:04.477 [2024-12-09T09:41:36.918Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:31:04.477 00:31:05.850 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:05.850 Nvme0n1 : 4.00 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:31:05.850 [2024-12-09T09:41:38.291Z] =================================================================================================================== 00:31:05.850 [2024-12-09T09:41:38.291Z] Total : 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:31:05.850 00:31:06.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:31:06.417 Nvme0n1 : 5.00 15446.60 60.34 0.00 0.00 0.00 0.00 0.00 00:31:06.417 [2024-12-09T09:41:38.858Z] =================================================================================================================== 00:31:06.417 [2024-12-09T09:41:38.858Z] Total : 15446.60 60.34 0.00 0.00 0.00 0.00 0.00 00:31:06.417 00:31:07.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:07.792 Nvme0n1 : 6.00 15518.00 60.62 0.00 0.00 0.00 0.00 0.00 00:31:07.792 [2024-12-09T09:41:40.233Z] =================================================================================================================== 00:31:07.792 [2024-12-09T09:41:40.233Z] Total : 15518.00 60.62 0.00 0.00 0.00 0.00 0.00 00:31:07.792 00:31:08.734 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:08.734 Nvme0n1 : 7.00 15532.71 60.67 0.00 0.00 0.00 0.00 0.00 00:31:08.734 [2024-12-09T09:41:41.175Z] =================================================================================================================== 00:31:08.734 [2024-12-09T09:41:41.175Z] Total : 15532.71 60.67 0.00 0.00 0.00 0.00 0.00 00:31:08.734 00:31:09.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:09.666 Nvme0n1 : 8.00 15575.50 60.84 0.00 0.00 0.00 0.00 0.00 00:31:09.666 [2024-12-09T09:41:42.107Z] =================================================================================================================== 00:31:09.666 [2024-12-09T09:41:42.107Z] Total : 15575.50 60.84 0.00 0.00 0.00 0.00 0.00 00:31:09.666 00:31:10.601 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:10.601 Nvme0n1 : 9.00 15622.89 61.03 0.00 0.00 0.00 0.00 0.00 00:31:10.602 [2024-12-09T09:41:43.043Z] =================================================================================================================== 00:31:10.602 [2024-12-09T09:41:43.043Z] Total : 15622.89 61.03 0.00 0.00 0.00 0.00 0.00 00:31:10.602 
00:31:11.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.555 Nvme0n1 : 10.00 15629.10 61.05 0.00 0.00 0.00 0.00 0.00 00:31:11.555 [2024-12-09T09:41:43.996Z] =================================================================================================================== 00:31:11.555 [2024-12-09T09:41:43.996Z] Total : 15629.10 61.05 0.00 0.00 0.00 0.00 0.00 00:31:11.555 00:31:11.555 00:31:11.555 Latency(us) 00:31:11.555 [2024-12-09T09:41:43.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.555 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:11.555 Nvme0n1 : 10.01 15634.13 61.07 0.00 0.00 8181.84 4369.07 17670.45 00:31:11.555 [2024-12-09T09:41:43.996Z] =================================================================================================================== 00:31:11.555 [2024-12-09T09:41:43.996Z] Total : 15634.13 61.07 0.00 0.00 8181.84 4369.07 17670.45 00:31:11.555 { 00:31:11.555 "results": [ 00:31:11.555 { 00:31:11.555 "job": "Nvme0n1", 00:31:11.555 "core_mask": "0x2", 00:31:11.555 "workload": "randwrite", 00:31:11.555 "status": "finished", 00:31:11.555 "queue_depth": 128, 00:31:11.555 "io_size": 4096, 00:31:11.555 "runtime": 10.008997, 00:31:11.555 "iops": 15634.13396966749, 00:31:11.555 "mibps": 61.07083581901363, 00:31:11.555 "io_failed": 0, 00:31:11.555 "io_timeout": 0, 00:31:11.555 "avg_latency_us": 8181.839704502755, 00:31:11.555 "min_latency_us": 4369.066666666667, 00:31:11.555 "max_latency_us": 17670.447407407406 00:31:11.555 } 00:31:11.555 ], 00:31:11.555 "core_count": 1 00:31:11.555 } 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2679673 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2679673 ']' 00:31:11.555 10:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2679673 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2679673 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2679673' 00:31:11.555 killing process with pid 2679673 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2679673 00:31:11.555 Received shutdown signal, test time was about 10.000000 seconds 00:31:11.555 00:31:11.555 Latency(us) 00:31:11.555 [2024-12-09T09:41:43.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.555 [2024-12-09T09:41:43.996Z] =================================================================================================================== 00:31:11.555 [2024-12-09T09:41:43.996Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:11.555 10:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2679673 00:31:11.813 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:12.381 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.637 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:12.637 10:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:12.895 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:12.895 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:12.895 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:13.152 [2024-12-09 10:41:45.407779] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:13.152 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:13.410 request: 00:31:13.410 { 00:31:13.410 "uuid": "86584f36-4e59-4698-95d7-b56f3a9be1a6", 00:31:13.410 "method": 
"bdev_lvol_get_lvstores", 00:31:13.410 "req_id": 1 00:31:13.410 } 00:31:13.410 Got JSON-RPC error response 00:31:13.410 response: 00:31:13.410 { 00:31:13.410 "code": -19, 00:31:13.410 "message": "No such device" 00:31:13.410 } 00:31:13.410 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:31:13.410 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:13.410 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:13.410 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:13.410 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:13.667 aio_bdev 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 76405865-cb45-443e-80fe-ca63d7bc213e 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=76405865-cb45-443e-80fe-ca63d7bc213e 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:13.667 10:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:13.924 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 76405865-cb45-443e-80fe-ca63d7bc213e -t 2000 00:31:14.182 [ 00:31:14.182 { 00:31:14.182 "name": "76405865-cb45-443e-80fe-ca63d7bc213e", 00:31:14.182 "aliases": [ 00:31:14.182 "lvs/lvol" 00:31:14.182 ], 00:31:14.182 "product_name": "Logical Volume", 00:31:14.182 "block_size": 4096, 00:31:14.182 "num_blocks": 38912, 00:31:14.182 "uuid": "76405865-cb45-443e-80fe-ca63d7bc213e", 00:31:14.182 "assigned_rate_limits": { 00:31:14.182 "rw_ios_per_sec": 0, 00:31:14.182 "rw_mbytes_per_sec": 0, 00:31:14.182 "r_mbytes_per_sec": 0, 00:31:14.182 "w_mbytes_per_sec": 0 00:31:14.182 }, 00:31:14.182 "claimed": false, 00:31:14.182 "zoned": false, 00:31:14.182 "supported_io_types": { 00:31:14.182 "read": true, 00:31:14.182 "write": true, 00:31:14.182 "unmap": true, 00:31:14.182 "flush": false, 00:31:14.182 "reset": true, 00:31:14.182 "nvme_admin": false, 00:31:14.182 "nvme_io": false, 00:31:14.182 "nvme_io_md": false, 00:31:14.182 "write_zeroes": true, 00:31:14.182 "zcopy": false, 00:31:14.182 "get_zone_info": false, 00:31:14.182 "zone_management": false, 00:31:14.182 "zone_append": false, 00:31:14.182 "compare": false, 00:31:14.182 "compare_and_write": false, 00:31:14.182 "abort": false, 00:31:14.182 "seek_hole": true, 00:31:14.182 "seek_data": true, 00:31:14.182 "copy": false, 00:31:14.182 "nvme_iov_md": false 00:31:14.182 }, 00:31:14.182 "driver_specific": { 00:31:14.182 "lvol": { 00:31:14.182 "lvol_store_uuid": "86584f36-4e59-4698-95d7-b56f3a9be1a6", 00:31:14.182 "base_bdev": "aio_bdev", 00:31:14.182 
"thin_provision": false, 00:31:14.182 "num_allocated_clusters": 38, 00:31:14.182 "snapshot": false, 00:31:14.182 "clone": false, 00:31:14.182 "esnap_clone": false 00:31:14.182 } 00:31:14.182 } 00:31:14.182 } 00:31:14.182 ] 00:31:14.182 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:31:14.182 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:14.182 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:14.439 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:14.439 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 00:31:14.439 10:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:14.696 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:14.696 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 76405865-cb45-443e-80fe-ca63d7bc213e 00:31:14.953 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86584f36-4e59-4698-95d7-b56f3a9be1a6 
00:31:15.516 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:15.516 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:15.773 00:31:15.773 real 0m18.125s 00:31:15.773 user 0m17.790s 00:31:15.773 sys 0m1.815s 00:31:15.773 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:15.773 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:15.773 ************************************ 00:31:15.773 END TEST lvs_grow_clean 00:31:15.773 ************************************ 00:31:15.773 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:15.773 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:15.773 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:15.773 10:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:15.773 ************************************ 00:31:15.773 START TEST lvs_grow_dirty 00:31:15.773 ************************************ 00:31:15.773 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:31:15.773 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:15.774 10:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:15.774 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:16.031 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:16.031 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:16.288 10:41:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fbddee06-0d60-430b-8408-88820a522a01 00:31:16.288 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:16.288 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:16.545 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:16.545 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:16.545 10:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fbddee06-0d60-430b-8408-88820a522a01 lvol 150 00:31:16.802 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:16.803 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:16.803 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:17.060 [2024-12-09 10:41:49.423735] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:17.060 [2024-12-09 
10:41:49.423857] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:17.060 true 00:31:17.060 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:17.060 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:17.319 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:17.319 10:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:17.577 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:18.142 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:18.142 [2024-12-09 10:41:50.572079] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.400 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2681836 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2681836 /var/tmp/bdevperf.sock 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2681836 ']' 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:18.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:18.658 10:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:18.658 [2024-12-09 10:41:50.922075] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:31:18.658 [2024-12-09 10:41:50.922165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681836 ] 00:31:18.658 [2024-12-09 10:41:50.986739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.658 [2024-12-09 10:41:51.044427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:18.916 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:18.916 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:18.916 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:19.481 Nvme0n1 00:31:19.481 10:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:19.739 [ 00:31:19.739 { 00:31:19.739 "name": "Nvme0n1", 00:31:19.739 "aliases": [ 00:31:19.739 "55084e67-f8ef-4c91-9730-14bc815e97c0" 00:31:19.739 ], 00:31:19.739 "product_name": "NVMe disk", 00:31:19.739 "block_size": 4096, 00:31:19.739 "num_blocks": 38912, 00:31:19.739 "uuid": "55084e67-f8ef-4c91-9730-14bc815e97c0", 00:31:19.739 "numa_id": 0, 00:31:19.739 "assigned_rate_limits": { 00:31:19.739 "rw_ios_per_sec": 0, 00:31:19.739 "rw_mbytes_per_sec": 0, 00:31:19.739 "r_mbytes_per_sec": 0, 00:31:19.739 "w_mbytes_per_sec": 0 00:31:19.739 }, 00:31:19.739 "claimed": false, 00:31:19.739 "zoned": false, 
00:31:19.739 "supported_io_types": { 00:31:19.739 "read": true, 00:31:19.739 "write": true, 00:31:19.739 "unmap": true, 00:31:19.739 "flush": true, 00:31:19.739 "reset": true, 00:31:19.739 "nvme_admin": true, 00:31:19.739 "nvme_io": true, 00:31:19.739 "nvme_io_md": false, 00:31:19.739 "write_zeroes": true, 00:31:19.739 "zcopy": false, 00:31:19.739 "get_zone_info": false, 00:31:19.739 "zone_management": false, 00:31:19.739 "zone_append": false, 00:31:19.739 "compare": true, 00:31:19.739 "compare_and_write": true, 00:31:19.739 "abort": true, 00:31:19.739 "seek_hole": false, 00:31:19.739 "seek_data": false, 00:31:19.739 "copy": true, 00:31:19.739 "nvme_iov_md": false 00:31:19.739 }, 00:31:19.739 "memory_domains": [ 00:31:19.739 { 00:31:19.739 "dma_device_id": "system", 00:31:19.739 "dma_device_type": 1 00:31:19.739 } 00:31:19.739 ], 00:31:19.739 "driver_specific": { 00:31:19.739 "nvme": [ 00:31:19.739 { 00:31:19.739 "trid": { 00:31:19.739 "trtype": "TCP", 00:31:19.739 "adrfam": "IPv4", 00:31:19.739 "traddr": "10.0.0.2", 00:31:19.739 "trsvcid": "4420", 00:31:19.739 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:19.739 }, 00:31:19.739 "ctrlr_data": { 00:31:19.739 "cntlid": 1, 00:31:19.739 "vendor_id": "0x8086", 00:31:19.740 "model_number": "SPDK bdev Controller", 00:31:19.740 "serial_number": "SPDK0", 00:31:19.740 "firmware_revision": "25.01", 00:31:19.740 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:19.740 "oacs": { 00:31:19.740 "security": 0, 00:31:19.740 "format": 0, 00:31:19.740 "firmware": 0, 00:31:19.740 "ns_manage": 0 00:31:19.740 }, 00:31:19.740 "multi_ctrlr": true, 00:31:19.740 "ana_reporting": false 00:31:19.740 }, 00:31:19.740 "vs": { 00:31:19.740 "nvme_version": "1.3" 00:31:19.740 }, 00:31:19.740 "ns_data": { 00:31:19.740 "id": 1, 00:31:19.740 "can_share": true 00:31:19.740 } 00:31:19.740 } 00:31:19.740 ], 00:31:19.740 "mp_policy": "active_passive" 00:31:19.740 } 00:31:19.740 } 00:31:19.740 ] 00:31:19.740 10:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2681971 00:31:19.740 10:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:19.740 10:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:19.740 Running I/O for 10 seconds... 00:31:21.112 Latency(us) 00:31:21.112 [2024-12-09T09:41:53.553Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:21.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:21.112 Nvme0n1 : 1.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:21.112 [2024-12-09T09:41:53.553Z] =================================================================================================================== 00:31:21.112 [2024-12-09T09:41:53.553Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:21.112 00:31:21.677 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fbddee06-0d60-430b-8408-88820a522a01 00:31:21.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:21.677 Nvme0n1 : 2.00 15176.50 59.28 0.00 0.00 0.00 0.00 0.00 00:31:21.677 [2024-12-09T09:41:54.118Z] =================================================================================================================== 00:31:21.677 [2024-12-09T09:41:54.118Z] Total : 15176.50 59.28 0.00 0.00 0.00 0.00 0.00 00:31:21.677 00:31:21.934 true 00:31:21.934 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:21.934 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:22.192 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:22.192 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:22.192 10:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2681971 00:31:22.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:22.757 Nvme0n1 : 3.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:31:22.757 [2024-12-09T09:41:55.198Z] =================================================================================================================== 00:31:22.757 [2024-12-09T09:41:55.198Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:31:22.757 00:31:23.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:23.690 Nvme0n1 : 4.00 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:31:23.690 [2024-12-09T09:41:56.131Z] =================================================================================================================== 00:31:23.690 [2024-12-09T09:41:56.131Z] Total : 15271.75 59.66 0.00 0.00 0.00 0.00 0.00 00:31:23.690 00:31:25.066 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:25.066 Nvme0n1 : 5.00 15265.40 59.63 0.00 0.00 0.00 0.00 0.00 00:31:25.066 [2024-12-09T09:41:57.507Z] =================================================================================================================== 00:31:25.066 [2024-12-09T09:41:57.507Z] Total : 15265.40 59.63 0.00 0.00 0.00 0.00 0.00 00:31:25.066 00:31:26.000 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:31:26.000 Nvme0n1 : 6.00 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:31:26.000 [2024-12-09T09:41:58.441Z] =================================================================================================================== 00:31:26.000 [2024-12-09T09:41:58.441Z] Total : 15324.67 59.86 0.00 0.00 0.00 0.00 0.00 00:31:26.000 00:31:26.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:26.933 Nvme0n1 : 7.00 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:31:26.933 [2024-12-09T09:41:59.374Z] =================================================================================================================== 00:31:26.933 [2024-12-09T09:41:59.374Z] Total : 15367.00 60.03 0.00 0.00 0.00 0.00 0.00 00:31:26.933 00:31:27.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:27.867 Nvme0n1 : 8.00 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:31:27.867 [2024-12-09T09:42:00.308Z] =================================================================================================================== 00:31:27.867 [2024-12-09T09:42:00.308Z] Total : 15398.75 60.15 0.00 0.00 0.00 0.00 0.00 00:31:27.867 00:31:28.801 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:28.801 Nvme0n1 : 9.00 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:31:28.801 [2024-12-09T09:42:01.242Z] =================================================================================================================== 00:31:28.801 [2024-12-09T09:42:01.242Z] Total : 15409.33 60.19 0.00 0.00 0.00 0.00 0.00 00:31:28.801 00:31:29.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.736 Nvme0n1 : 10.00 15417.80 60.23 0.00 0.00 0.00 0.00 0.00 00:31:29.736 [2024-12-09T09:42:02.177Z] =================================================================================================================== 00:31:29.736 [2024-12-09T09:42:02.177Z] Total : 15417.80 60.23 0.00 0.00 0.00 0.00 0.00 00:31:29.736 00:31:29.736 
00:31:29.736 Latency(us) 00:31:29.736 [2024-12-09T09:42:02.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:29.736 Nvme0n1 : 10.01 15419.58 60.23 0.00 0.00 8296.38 7184.69 17767.54 00:31:29.736 [2024-12-09T09:42:02.177Z] =================================================================================================================== 00:31:29.736 [2024-12-09T09:42:02.177Z] Total : 15419.58 60.23 0.00 0.00 8296.38 7184.69 17767.54 00:31:29.736 { 00:31:29.736 "results": [ 00:31:29.736 { 00:31:29.736 "job": "Nvme0n1", 00:31:29.736 "core_mask": "0x2", 00:31:29.736 "workload": "randwrite", 00:31:29.736 "status": "finished", 00:31:29.736 "queue_depth": 128, 00:31:29.736 "io_size": 4096, 00:31:29.736 "runtime": 10.00715, 00:31:29.736 "iops": 15419.57500387223, 00:31:29.736 "mibps": 60.2327148588759, 00:31:29.736 "io_failed": 0, 00:31:29.736 "io_timeout": 0, 00:31:29.736 "avg_latency_us": 8296.375482943704, 00:31:29.736 "min_latency_us": 7184.687407407408, 00:31:29.736 "max_latency_us": 17767.53777777778 00:31:29.736 } 00:31:29.736 ], 00:31:29.736 "core_count": 1 00:31:29.736 } 00:31:29.736 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2681836 00:31:29.736 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2681836 ']' 00:31:29.736 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2681836 00:31:29.736 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:31:29.736 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.736 10:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2681836 00:31:29.994 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:29.994 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:29.994 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2681836' 00:31:29.994 killing process with pid 2681836 00:31:29.994 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2681836 00:31:29.994 Received shutdown signal, test time was about 10.000000 seconds 00:31:29.994 00:31:29.994 Latency(us) 00:31:29.994 [2024-12-09T09:42:02.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.994 [2024-12-09T09:42:02.435Z] =================================================================================================================== 00:31:29.994 [2024-12-09T09:42:02.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:29.994 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2681836 00:31:30.252 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:30.510 10:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.767 10:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:30.767 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2679220 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2679220 00:31:31.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2679220 Killed "${NVMF_APP[@]}" "$@" 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2683352 00:31:31.026 10:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2683352 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2683352 ']' 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.026 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:31.026 [2024-12-09 10:42:03.384492] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:31.026 [2024-12-09 10:42:03.385564] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:31:31.026 [2024-12-09 10:42:03.385638] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:31.026 [2024-12-09 10:42:03.461196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.285 [2024-12-09 10:42:03.520007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:31.285 [2024-12-09 10:42:03.520073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:31.285 [2024-12-09 10:42:03.520101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:31.285 [2024-12-09 10:42:03.520112] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:31.285 [2024-12-09 10:42:03.520122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:31.285 [2024-12-09 10:42:03.520755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.285 [2024-12-09 10:42:03.615463] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:31.285 [2024-12-09 10:42:03.615760] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:31.285 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:31.543 [2024-12-09 10:42:03.915510] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:31:31.543 [2024-12-09 10:42:03.915650] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:31:31.543 [2024-12-09 10:42:03.915700] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:31.543 10:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:31.800 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55084e67-f8ef-4c91-9730-14bc815e97c0 -t 2000 00:31:32.059 [ 00:31:32.059 { 00:31:32.059 "name": "55084e67-f8ef-4c91-9730-14bc815e97c0", 00:31:32.059 "aliases": [ 00:31:32.059 "lvs/lvol" 00:31:32.059 ], 00:31:32.059 "product_name": "Logical Volume", 00:31:32.059 "block_size": 4096, 00:31:32.059 "num_blocks": 38912, 00:31:32.059 "uuid": "55084e67-f8ef-4c91-9730-14bc815e97c0", 00:31:32.059 "assigned_rate_limits": { 00:31:32.059 "rw_ios_per_sec": 0, 00:31:32.059 "rw_mbytes_per_sec": 0, 00:31:32.059 "r_mbytes_per_sec": 0, 00:31:32.059 "w_mbytes_per_sec": 0 00:31:32.059 }, 00:31:32.059 "claimed": false, 00:31:32.059 "zoned": false, 00:31:32.059 "supported_io_types": { 00:31:32.059 "read": true, 00:31:32.059 "write": true, 00:31:32.059 "unmap": true, 00:31:32.059 "flush": false, 00:31:32.059 "reset": true, 00:31:32.059 "nvme_admin": false, 00:31:32.059 "nvme_io": false, 00:31:32.059 "nvme_io_md": false, 00:31:32.059 "write_zeroes": true, 
00:31:32.059 "zcopy": false, 00:31:32.059 "get_zone_info": false, 00:31:32.059 "zone_management": false, 00:31:32.059 "zone_append": false, 00:31:32.059 "compare": false, 00:31:32.059 "compare_and_write": false, 00:31:32.059 "abort": false, 00:31:32.059 "seek_hole": true, 00:31:32.059 "seek_data": true, 00:31:32.059 "copy": false, 00:31:32.059 "nvme_iov_md": false 00:31:32.059 }, 00:31:32.059 "driver_specific": { 00:31:32.059 "lvol": { 00:31:32.059 "lvol_store_uuid": "fbddee06-0d60-430b-8408-88820a522a01", 00:31:32.059 "base_bdev": "aio_bdev", 00:31:32.059 "thin_provision": false, 00:31:32.059 "num_allocated_clusters": 38, 00:31:32.059 "snapshot": false, 00:31:32.059 "clone": false, 00:31:32.059 "esnap_clone": false 00:31:32.059 } 00:31:32.059 } 00:31:32.059 } 00:31:32.059 ] 00:31:32.059 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:32.059 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:32.059 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:31:32.624 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:31:32.624 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:32.624 10:42:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:31:32.624 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:31:32.624 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:32.881 [2024-12-09 10:42:05.285312] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:32.881 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:33.138 request: 00:31:33.138 { 00:31:33.138 "uuid": "fbddee06-0d60-430b-8408-88820a522a01", 00:31:33.138 "method": "bdev_lvol_get_lvstores", 00:31:33.138 "req_id": 1 00:31:33.138 } 00:31:33.138 Got JSON-RPC error response 00:31:33.138 response: 00:31:33.138 { 00:31:33.138 "code": -19, 00:31:33.138 "message": "No such device" 00:31:33.138 } 00:31:33.395 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:31:33.395 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:33.395 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:33.395 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:33.395 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:33.653 aio_bdev 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:31:33.653 10:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:33.910 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 55084e67-f8ef-4c91-9730-14bc815e97c0 -t 2000 00:31:34.168 [ 00:31:34.168 { 00:31:34.168 "name": "55084e67-f8ef-4c91-9730-14bc815e97c0", 00:31:34.168 "aliases": [ 00:31:34.168 "lvs/lvol" 00:31:34.168 ], 00:31:34.168 "product_name": "Logical Volume", 00:31:34.168 "block_size": 4096, 00:31:34.168 "num_blocks": 38912, 00:31:34.168 "uuid": "55084e67-f8ef-4c91-9730-14bc815e97c0", 00:31:34.168 "assigned_rate_limits": { 00:31:34.168 "rw_ios_per_sec": 0, 00:31:34.168 "rw_mbytes_per_sec": 0, 00:31:34.168 
"r_mbytes_per_sec": 0, 00:31:34.168 "w_mbytes_per_sec": 0 00:31:34.168 }, 00:31:34.168 "claimed": false, 00:31:34.168 "zoned": false, 00:31:34.168 "supported_io_types": { 00:31:34.168 "read": true, 00:31:34.168 "write": true, 00:31:34.168 "unmap": true, 00:31:34.168 "flush": false, 00:31:34.168 "reset": true, 00:31:34.168 "nvme_admin": false, 00:31:34.168 "nvme_io": false, 00:31:34.168 "nvme_io_md": false, 00:31:34.168 "write_zeroes": true, 00:31:34.168 "zcopy": false, 00:31:34.168 "get_zone_info": false, 00:31:34.168 "zone_management": false, 00:31:34.168 "zone_append": false, 00:31:34.168 "compare": false, 00:31:34.168 "compare_and_write": false, 00:31:34.168 "abort": false, 00:31:34.168 "seek_hole": true, 00:31:34.168 "seek_data": true, 00:31:34.168 "copy": false, 00:31:34.168 "nvme_iov_md": false 00:31:34.168 }, 00:31:34.168 "driver_specific": { 00:31:34.168 "lvol": { 00:31:34.168 "lvol_store_uuid": "fbddee06-0d60-430b-8408-88820a522a01", 00:31:34.168 "base_bdev": "aio_bdev", 00:31:34.168 "thin_provision": false, 00:31:34.168 "num_allocated_clusters": 38, 00:31:34.168 "snapshot": false, 00:31:34.168 "clone": false, 00:31:34.168 "esnap_clone": false 00:31:34.168 } 00:31:34.168 } 00:31:34.168 } 00:31:34.168 ] 00:31:34.168 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:31:34.168 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:34.169 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:34.426 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:34.426 10:42:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fbddee06-0d60-430b-8408-88820a522a01 00:31:34.426 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:34.684 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:34.684 10:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 55084e67-f8ef-4c91-9730-14bc815e97c0 00:31:34.941 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fbddee06-0d60-430b-8408-88820a522a01 00:31:35.199 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:35.456 00:31:35.456 real 0m19.809s 00:31:35.456 user 0m37.004s 00:31:35.456 sys 0m4.573s 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:31:35.456 ************************************ 00:31:35.456 END TEST lvs_grow_dirty 00:31:35.456 ************************************ 
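The lvs_grow_dirty test above verifies lvstore capacity by calling `rpc.py bdev_lvol_get_lvstores -u <uuid>` and extracting fields with `jq -r '.[0].free_clusters'` and `jq -r '.[0].total_data_clusters'`, then asserting `(( free_clusters == 61 ))` and `(( data_clusters == 99 ))`. A minimal Python sketch of that extraction, using a hypothetical sample payload shaped like the RPC output (field names taken from the log above; the payload itself is illustrative, not captured from this run):

```python
import json

# Hypothetical sample shaped like `rpc.py bdev_lvol_get_lvstores -u <uuid>` output.
# Field names match the log above; the values mirror the test's expectations.
sample = json.loads("""
[
  {
    "uuid": "fbddee06-0d60-430b-8408-88820a522a01",
    "base_bdev": "aio_bdev",
    "free_clusters": 61,
    "total_data_clusters": 99
  }
]
""")

# Equivalent of jq -r '.[0].free_clusters' / '.[0].total_data_clusters'
free_clusters = sample[0]["free_clusters"]
data_clusters = sample[0]["total_data_clusters"]
print(free_clusters, data_clusters)  # → 61 99
```

Once the base AIO bdev is removed (the `bdev_aio_delete aio_bdev` step), the same RPC instead fails with the JSON-RPC error shown above, `"code": -19` / `"message": "No such device"`, i.e. the negated POSIX `ENODEV`, which is what the `NOT` wrapper asserts.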
00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:31:35.456 nvmf_trace.0 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.456 10:42:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.456 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.456 rmmod nvme_tcp 00:31:35.714 rmmod nvme_fabrics 00:31:35.714 rmmod nvme_keyring 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2683352 ']' 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2683352 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2683352 ']' 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2683352 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2683352 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:35.714 
10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2683352' 00:31:35.714 killing process with pid 2683352 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2683352 00:31:35.714 10:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2683352 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.971 10:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.872 
10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.872 00:31:37.872 real 0m43.692s 00:31:37.872 user 0m56.701s 00:31:37.872 sys 0m8.485s 00:31:37.872 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.872 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:37.872 ************************************ 00:31:37.872 END TEST nvmf_lvs_grow 00:31:37.872 ************************************ 00:31:37.872 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:37.872 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:37.872 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.872 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:38.131 ************************************ 00:31:38.131 START TEST nvmf_bdev_io_wait 00:31:38.131 ************************************ 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:31:38.131 * Looking for test storage... 
00:31:38.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.131 --rc genhtml_branch_coverage=1 00:31:38.131 --rc genhtml_function_coverage=1 00:31:38.131 --rc genhtml_legend=1 00:31:38.131 --rc geninfo_all_blocks=1 00:31:38.131 --rc geninfo_unexecuted_blocks=1 00:31:38.131 00:31:38.131 ' 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.131 --rc genhtml_branch_coverage=1 00:31:38.131 --rc genhtml_function_coverage=1 00:31:38.131 --rc genhtml_legend=1 00:31:38.131 --rc geninfo_all_blocks=1 00:31:38.131 --rc geninfo_unexecuted_blocks=1 00:31:38.131 00:31:38.131 ' 00:31:38.131 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.131 --rc genhtml_branch_coverage=1 00:31:38.132 --rc genhtml_function_coverage=1 00:31:38.132 --rc genhtml_legend=1 00:31:38.132 --rc geninfo_all_blocks=1 00:31:38.132 --rc geninfo_unexecuted_blocks=1 00:31:38.132 00:31:38.132 ' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:38.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.132 --rc genhtml_branch_coverage=1 00:31:38.132 --rc genhtml_function_coverage=1 
00:31:38.132 --rc genhtml_legend=1 00:31:38.132 --rc geninfo_all_blocks=1 00:31:38.132 --rc geninfo_unexecuted_blocks=1 00:31:38.132 00:31:38.132 ' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:38.132 10:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.132 10:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.132 10:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.132 10:42:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.132 10:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.665 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:40.665 10:42:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:40.666 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:40.666 Found 
0000:09:00.1 (0x8086 - 0x159b) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:40.666 Found net devices under 0000:09:00.0: cvl_0_0 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:40.666 Found net devices under 0000:09:00.1: cvl_0_1 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:40.666 10:42:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:40.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:31:40.666 00:31:40.666 --- 10.0.0.2 ping statistics --- 00:31:40.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.666 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:40.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:31:40.666 00:31:40.666 --- 10.0.0.1 ping statistics --- 00:31:40.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.666 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.666 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.667 10:42:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2686430 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2686430 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2686430 ']' 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.667 10:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.667 [2024-12-09 10:42:12.837649] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:40.667 [2024-12-09 10:42:12.838693] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:31:40.667 [2024-12-09 10:42:12.838741] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.667 [2024-12-09 10:42:12.906298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.667 [2024-12-09 10:42:12.963967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.667 [2024-12-09 10:42:12.964023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.667 [2024-12-09 10:42:12.964044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.667 [2024-12-09 10:42:12.964055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.667 [2024-12-09 10:42:12.964067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:40.667 [2024-12-09 10:42:12.965691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.667 [2024-12-09 10:42:12.965748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.667 [2024-12-09 10:42:12.965813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.667 [2024-12-09 10:42:12.965816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.667 [2024-12-09 10:42:12.966281] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.667 10:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.667 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.925 [2024-12-09 10:42:13.163215] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:40.925 [2024-12-09 10:42:13.163370] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:40.925 [2024-12-09 10:42:13.164286] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.925 [2024-12-09 10:42:13.165068] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:40.925 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.925 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:40.925 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.925 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.925 [2024-12-09 10:42:13.170531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.925 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.926 Malloc0 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.926 10:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:40.926 [2024-12-09 10:42:13.230715] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2686458 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2686460 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:40.926 10:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2686462 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.926 { 00:31:40.926 "params": { 00:31:40.926 "name": "Nvme$subsystem", 00:31:40.926 "trtype": "$TEST_TRANSPORT", 00:31:40.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.926 "adrfam": "ipv4", 00:31:40.926 "trsvcid": "$NVMF_PORT", 00:31:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.926 "hdgst": ${hdgst:-false}, 00:31:40.926 "ddgst": ${ddgst:-false} 00:31:40.926 }, 00:31:40.926 "method": "bdev_nvme_attach_controller" 00:31:40.926 } 00:31:40.926 EOF 00:31:40.926 )") 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.926 10:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2686464 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.926 { 00:31:40.926 "params": { 00:31:40.926 "name": "Nvme$subsystem", 00:31:40.926 "trtype": "$TEST_TRANSPORT", 00:31:40.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.926 "adrfam": "ipv4", 00:31:40.926 "trsvcid": "$NVMF_PORT", 00:31:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.926 "hdgst": ${hdgst:-false}, 00:31:40.926 "ddgst": ${ddgst:-false} 00:31:40.926 }, 00:31:40.926 "method": "bdev_nvme_attach_controller" 00:31:40.926 } 00:31:40.926 EOF 00:31:40.926 )") 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.926 { 00:31:40.926 "params": { 00:31:40.926 "name": "Nvme$subsystem", 00:31:40.926 "trtype": "$TEST_TRANSPORT", 00:31:40.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.926 "adrfam": "ipv4", 00:31:40.926 "trsvcid": "$NVMF_PORT", 00:31:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.926 "hdgst": ${hdgst:-false}, 00:31:40.926 "ddgst": ${ddgst:-false} 00:31:40.926 }, 00:31:40.926 "method": "bdev_nvme_attach_controller" 00:31:40.926 } 00:31:40.926 EOF 00:31:40.926 )") 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:40.926 { 00:31:40.926 "params": { 00:31:40.926 "name": "Nvme$subsystem", 00:31:40.926 "trtype": "$TEST_TRANSPORT", 00:31:40.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.926 "adrfam": "ipv4", 00:31:40.926 "trsvcid": "$NVMF_PORT", 00:31:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:40.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.926 "hdgst": ${hdgst:-false}, 00:31:40.926 "ddgst": ${ddgst:-false} 00:31:40.926 }, 00:31:40.926 "method": "bdev_nvme_attach_controller" 00:31:40.926 } 00:31:40.926 EOF 00:31:40.926 )") 00:31:40.926 10:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2686458 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.926 "params": { 00:31:40.926 "name": "Nvme1", 00:31:40.926 "trtype": "tcp", 00:31:40.926 "traddr": "10.0.0.2", 00:31:40.926 "adrfam": "ipv4", 00:31:40.926 "trsvcid": "4420", 00:31:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.926 "hdgst": false, 00:31:40.926 "ddgst": false 00:31:40.926 }, 00:31:40.926 "method": "bdev_nvme_attach_controller" 00:31:40.926 }' 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.926 "params": { 00:31:40.926 "name": "Nvme1", 00:31:40.926 "trtype": "tcp", 00:31:40.926 "traddr": "10.0.0.2", 00:31:40.926 "adrfam": "ipv4", 00:31:40.926 "trsvcid": "4420", 00:31:40.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.926 "hdgst": false, 00:31:40.926 "ddgst": false 00:31:40.926 }, 00:31:40.926 "method": "bdev_nvme_attach_controller" 00:31:40.926 }' 00:31:40.926 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.927 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.927 "params": { 00:31:40.927 "name": "Nvme1", 00:31:40.927 "trtype": "tcp", 00:31:40.927 "traddr": "10.0.0.2", 00:31:40.927 "adrfam": "ipv4", 00:31:40.927 "trsvcid": "4420", 00:31:40.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.927 "hdgst": false, 00:31:40.927 "ddgst": false 00:31:40.927 }, 00:31:40.927 "method": "bdev_nvme_attach_controller" 00:31:40.927 }' 00:31:40.927 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:40.927 10:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:40.927 "params": { 00:31:40.927 "name": "Nvme1", 00:31:40.927 "trtype": "tcp", 00:31:40.927 "traddr": "10.0.0.2", 00:31:40.927 "adrfam": "ipv4", 00:31:40.927 "trsvcid": "4420", 00:31:40.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:40.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:40.927 "hdgst": false, 00:31:40.927 "ddgst": false 00:31:40.927 }, 00:31:40.927 "method": "bdev_nvme_attach_controller" 
00:31:40.927 }' 00:31:40.927 [2024-12-09 10:42:13.282593] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:31:40.927 [2024-12-09 10:42:13.282593] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:31:40.927 [2024-12-09 10:42:13.282593] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:31:40.927 [2024-12-09 10:42:13.282695] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 10:42:13.282696] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-12-09 10:42:13.282696] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:40.927 --proc-type=auto ] 00:31:40.927 --proc-type=auto ] 00:31:40.927 [2024-12-09 10:42:13.283556] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:31:40.927 [2024-12-09 10:42:13.283624] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:41.184 [2024-12-09 10:42:13.468591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.184 [2024-12-09 10:42:13.525151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:41.184 [2024-12-09 10:42:13.574206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.442 [2024-12-09 10:42:13.632414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:41.442 [2024-12-09 10:42:13.684294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.442 [2024-12-09 10:42:13.742183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:41.442 [2024-12-09 10:42:13.760395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.442 [2024-12-09 10:42:13.811630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:31:41.700 Running I/O for 1 seconds... 00:31:41.700 Running I/O for 1 seconds... 00:31:41.700 Running I/O for 1 seconds... 00:31:41.700 Running I/O for 1 seconds... 
00:31:42.634 190232.00 IOPS, 743.09 MiB/s 00:31:42.634 Latency(us) 00:31:42.634 [2024-12-09T09:42:15.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.634 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:42.634 Nvme1n1 : 1.00 189874.11 741.70 0.00 0.00 670.31 283.69 1844.72 00:31:42.634 [2024-12-09T09:42:15.075Z] =================================================================================================================== 00:31:42.634 [2024-12-09T09:42:15.075Z] Total : 189874.11 741.70 0.00 0.00 670.31 283.69 1844.72 00:31:42.634 6603.00 IOPS, 25.79 MiB/s 00:31:42.634 Latency(us) 00:31:42.634 [2024-12-09T09:42:15.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.634 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:42.634 Nvme1n1 : 1.02 6605.91 25.80 0.00 0.00 19212.98 3883.61 28156.21 00:31:42.634 [2024-12-09T09:42:15.075Z] =================================================================================================================== 00:31:42.634 [2024-12-09T09:42:15.075Z] Total : 6605.91 25.80 0.00 0.00 19212.98 3883.61 28156.21 00:31:42.634 8155.00 IOPS, 31.86 MiB/s 00:31:42.634 Latency(us) 00:31:42.634 [2024-12-09T09:42:15.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.634 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:42.634 Nvme1n1 : 1.01 8216.54 32.10 0.00 0.00 15503.91 6456.51 23107.51 00:31:42.634 [2024-12-09T09:42:15.075Z] =================================================================================================================== 00:31:42.634 [2024-12-09T09:42:15.075Z] Total : 8216.54 32.10 0.00 0.00 15503.91 6456.51 23107.51 00:31:42.891 6641.00 IOPS, 25.94 MiB/s 00:31:42.891 Latency(us) 00:31:42.891 [2024-12-09T09:42:15.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:42.891 Job: Nvme1n1 (Core Mask 0x80, 
workload: unmap, depth: 128, IO size: 4096) 00:31:42.891 Nvme1n1 : 1.01 6771.59 26.45 0.00 0.00 18851.11 3810.80 36894.34 00:31:42.891 [2024-12-09T09:42:15.332Z] =================================================================================================================== 00:31:42.891 [2024-12-09T09:42:15.332Z] Total : 6771.59 26.45 0.00 0.00 18851.11 3810.80 36894.34 00:31:42.891 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2686460 00:31:42.892 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2686462 00:31:42.892 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2686464 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:43.149 10:42:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:43.149 rmmod nvme_tcp 00:31:43.149 rmmod nvme_fabrics 00:31:43.149 rmmod nvme_keyring 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2686430 ']' 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2686430 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2686430 ']' 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2686430 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2686430 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2686430' 00:31:43.149 killing process with pid 2686430 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2686430 00:31:43.149 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2686430 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.407 10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:43.407 
10:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.309 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:45.309 00:31:45.309 real 0m7.407s 00:31:45.309 user 0m14.891s 00:31:45.309 sys 0m4.193s 00:31:45.309 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:45.309 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:45.309 ************************************ 00:31:45.309 END TEST nvmf_bdev_io_wait 00:31:45.309 ************************************ 00:31:45.567 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:45.567 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:45.567 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:45.567 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:45.567 ************************************ 00:31:45.567 START TEST nvmf_queue_depth 00:31:45.568 ************************************ 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:45.568 * Looking for test storage... 
00:31:45.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.568 --rc genhtml_branch_coverage=1 00:31:45.568 --rc genhtml_function_coverage=1 00:31:45.568 --rc genhtml_legend=1 00:31:45.568 --rc geninfo_all_blocks=1 00:31:45.568 --rc geninfo_unexecuted_blocks=1 00:31:45.568 00:31:45.568 ' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.568 --rc genhtml_branch_coverage=1 00:31:45.568 --rc genhtml_function_coverage=1 00:31:45.568 --rc genhtml_legend=1 00:31:45.568 --rc geninfo_all_blocks=1 00:31:45.568 --rc geninfo_unexecuted_blocks=1 00:31:45.568 00:31:45.568 ' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.568 --rc genhtml_branch_coverage=1 00:31:45.568 --rc genhtml_function_coverage=1 00:31:45.568 --rc genhtml_legend=1 00:31:45.568 --rc geninfo_all_blocks=1 00:31:45.568 --rc geninfo_unexecuted_blocks=1 00:31:45.568 00:31:45.568 ' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:45.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:45.568 --rc genhtml_branch_coverage=1 00:31:45.568 --rc genhtml_function_coverage=1 00:31:45.568 --rc genhtml_legend=1 00:31:45.568 --rc 
geninfo_all_blocks=1 00:31:45.568 --rc geninfo_unexecuted_blocks=1 00:31:45.568 00:31:45.568 ' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.568 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.569 10:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:45.569 10:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:45.569 10:42:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:45.569 10:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.099 
10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:48.099 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.099 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.100 10:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:48.100 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:48.100 Found net devices under 0000:09:00.0: cvl_0_0 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:48.100 Found net devices under 0000:09:00.1: cvl_0_1 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:48.100 10:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:48.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:31:48.100 00:31:48.100 --- 10.0.0.2 ping statistics --- 00:31:48.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.100 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:31:48.100 00:31:48.100 --- 10.0.0.1 ping statistics --- 00:31:48.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.100 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:48.100 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:48.101 10:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2688685 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2688685 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2688685 ']' 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.101 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.101 [2024-12-09 10:42:20.282317] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:48.101 [2024-12-09 10:42:20.283454] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:31:48.101 [2024-12-09 10:42:20.283530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.101 [2024-12-09 10:42:20.359500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.101 [2024-12-09 10:42:20.416782] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.101 [2024-12-09 10:42:20.416838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.101 [2024-12-09 10:42:20.416866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.101 [2024-12-09 10:42:20.416877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.101 [2024-12-09 10:42:20.416887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:48.101 [2024-12-09 10:42:20.417544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.101 [2024-12-09 10:42:20.513863] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:48.101 [2024-12-09 10:42:20.514213] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:48.360 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.360 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 [2024-12-09 10:42:20.570160] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 Malloc0 00:31:48.361 10:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 [2024-12-09 10:42:20.638304] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.361 
10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2688828 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2688828 /var/tmp/bdevperf.sock 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2688828 ']' 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:48.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.361 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.361 [2024-12-09 10:42:20.690371] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:31:48.361 [2024-12-09 10:42:20.690474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2688828 ] 00:31:48.361 [2024-12-09 10:42:20.762055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.661 [2024-12-09 10:42:20.826527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:48.661 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:48.661 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:48.661 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:48.661 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.661 10:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:48.977 NVMe0n1 00:31:48.977 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.977 10:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:48.977 Running I/O for 10 seconds... 
00:31:50.873 8349.00 IOPS, 32.61 MiB/s [2024-12-09T09:42:24.248Z] 8701.50 IOPS, 33.99 MiB/s [2024-12-09T09:42:25.620Z] 8640.33 IOPS, 33.75 MiB/s [2024-12-09T09:42:26.551Z] 8697.25 IOPS, 33.97 MiB/s [2024-12-09T09:42:27.484Z] 8717.00 IOPS, 34.05 MiB/s [2024-12-09T09:42:28.417Z] 8704.00 IOPS, 34.00 MiB/s [2024-12-09T09:42:29.351Z] 8729.29 IOPS, 34.10 MiB/s [2024-12-09T09:42:30.286Z] 8706.62 IOPS, 34.01 MiB/s [2024-12-09T09:42:31.659Z] 8745.22 IOPS, 34.16 MiB/s [2024-12-09T09:42:31.659Z] 8725.30 IOPS, 34.08 MiB/s 00:31:59.218 Latency(us) 00:31:59.218 [2024-12-09T09:42:31.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.218 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:59.218 Verification LBA range: start 0x0 length 0x4000 00:31:59.218 NVMe0n1 : 10.07 8762.21 34.23 0.00 0.00 116352.81 14563.56 68739.98 00:31:59.218 [2024-12-09T09:42:31.659Z] =================================================================================================================== 00:31:59.218 [2024-12-09T09:42:31.659Z] Total : 8762.21 34.23 0.00 0.00 116352.81 14563.56 68739.98 00:31:59.218 { 00:31:59.218 "results": [ 00:31:59.218 { 00:31:59.218 "job": "NVMe0n1", 00:31:59.218 "core_mask": "0x1", 00:31:59.218 "workload": "verify", 00:31:59.218 "status": "finished", 00:31:59.218 "verify_range": { 00:31:59.218 "start": 0, 00:31:59.218 "length": 16384 00:31:59.218 }, 00:31:59.218 "queue_depth": 1024, 00:31:59.218 "io_size": 4096, 00:31:59.218 "runtime": 10.068347, 00:31:59.218 "iops": 8762.212903468662, 00:31:59.218 "mibps": 34.22739415417446, 00:31:59.218 "io_failed": 0, 00:31:59.218 "io_timeout": 0, 00:31:59.218 "avg_latency_us": 116352.81437286074, 00:31:59.218 "min_latency_us": 14563.555555555555, 00:31:59.218 "max_latency_us": 68739.98222222223 00:31:59.218 } 00:31:59.218 ], 00:31:59.218 "core_count": 1 00:31:59.218 } 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2688828 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2688828 ']' 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2688828 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2688828 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2688828' 00:31:59.218 killing process with pid 2688828 00:31:59.218 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2688828 00:31:59.218 Received shutdown signal, test time was about 10.000000 seconds 00:31:59.218 00:31:59.218 Latency(us) 00:31:59.218 [2024-12-09T09:42:31.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.218 [2024-12-09T09:42:31.659Z] =================================================================================================================== 00:31:59.218 [2024-12-09T09:42:31.660Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2688828 00:31:59.219 10:42:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:59.219 rmmod nvme_tcp 00:31:59.219 rmmod nvme_fabrics 00:31:59.219 rmmod nvme_keyring 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2688685 ']' 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2688685 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2688685 ']' 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2688685 00:31:59.219 10:42:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:59.219 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2688685 00:31:59.477 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:59.477 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:59.477 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2688685' 00:31:59.477 killing process with pid 2688685 00:31:59.477 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2688685 00:31:59.477 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2688685 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:59.737 10:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.642 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:01.642 00:32:01.642 real 0m16.242s 00:32:01.642 user 0m22.430s 00:32:01.642 sys 0m3.345s 00:32:01.642 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:01.642 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:01.642 ************************************ 00:32:01.642 END TEST nvmf_queue_depth 00:32:01.642 ************************************ 00:32:01.643 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:01.643 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:01.643 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:01.643 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:01.902 ************************************ 00:32:01.902 START 
TEST nvmf_target_multipath 00:32:01.902 ************************************ 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:01.902 * Looking for test storage... 00:32:01.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:01.902 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:01.903 10:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.903 --rc genhtml_branch_coverage=1 00:32:01.903 --rc genhtml_function_coverage=1 00:32:01.903 --rc genhtml_legend=1 00:32:01.903 --rc geninfo_all_blocks=1 00:32:01.903 --rc geninfo_unexecuted_blocks=1 00:32:01.903 00:32:01.903 ' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.903 --rc genhtml_branch_coverage=1 00:32:01.903 --rc genhtml_function_coverage=1 00:32:01.903 --rc genhtml_legend=1 00:32:01.903 --rc geninfo_all_blocks=1 00:32:01.903 --rc geninfo_unexecuted_blocks=1 00:32:01.903 00:32:01.903 ' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.903 --rc genhtml_branch_coverage=1 00:32:01.903 --rc genhtml_function_coverage=1 00:32:01.903 --rc genhtml_legend=1 00:32:01.903 --rc geninfo_all_blocks=1 00:32:01.903 --rc geninfo_unexecuted_blocks=1 00:32:01.903 00:32:01.903 ' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:01.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:01.903 --rc genhtml_branch_coverage=1 00:32:01.903 --rc genhtml_function_coverage=1 00:32:01.903 --rc genhtml_legend=1 00:32:01.903 --rc geninfo_all_blocks=1 00:32:01.903 --rc geninfo_unexecuted_blocks=1 00:32:01.903 00:32:01.903 ' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:01.903 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:01.904 10:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:01.904 10:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:01.904 10:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.439 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.439 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:04.440 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:04.440 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:04.440 Found net devices under 0000:09:00.0: cvl_0_0 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.440 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:04.440 Found net devices under 0000:09:00.1: cvl_0_1 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.440 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.440 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:04.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:32:04.440 00:32:04.440 --- 10.0.0.2 ping statistics --- 00:32:04.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.440 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:04.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:32:04.440 00:32:04.440 --- 10.0.0.1 ping statistics --- 00:32:04.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.440 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:04.440 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:04.441 only one NIC for nvmf test 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:04.441 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:04.441 rmmod nvme_tcp 00:32:04.441 rmmod nvme_fabrics 00:32:04.441 rmmod nvme_keyring 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:04.441 10:42:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:04.441 10:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.346 
10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:06.346 00:32:06.346 real 0m4.639s 00:32:06.346 user 0m0.950s 00:32:06.346 sys 0m1.691s 00:32:06.346 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:06.347 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:06.347 ************************************ 00:32:06.347 END TEST nvmf_target_multipath 00:32:06.347 ************************************ 00:32:06.347 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:06.347 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:06.347 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:06.347 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:06.347 ************************************ 00:32:06.347 START TEST nvmf_zcopy 00:32:06.347 ************************************ 00:32:06.347 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:06.607 * Looking for test storage... 
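The multipath teardown traced above follows a fixed pattern: retry the NVMe module unloads, strip only the iptables rules the test tagged, then remove the SPDK network namespace. A dry-run sketch of that sequence (commands are echoed rather than executed, since the real ones need root; the function name is ours, and the interface/namespace names follow the log):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestfini-style teardown seen in the trace.
# Echoes the commands instead of running them; not the SPDK source itself.
teardown_nvmf_test() {
  local ns=$1 ini_if=$2
  # The real script retries these unloads up to 20 times under "set +e":
  echo "modprobe -v -r nvme-tcp"
  echo "modprobe -v -r nvme-fabrics"
  # Keep every iptables rule except those tagged with the SPDK_NVMF comment:
  echo "iptables-save | grep -v SPDK_NVMF | iptables-restore"
  echo "ip netns delete $ns"
  echo "ip -4 addr flush $ini_if"
}

teardown_nvmf_test cvl_0_0_ns_spdk cvl_0_1
```

The iptables-save/grep/iptables-restore pipeline is the notable trick: rules inserted with `-m comment --comment 'SPDK_NVMF:…'` can be removed wholesale without tracking their positions.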
00:32:06.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:06.607 10:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:06.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.607 --rc genhtml_branch_coverage=1 00:32:06.607 --rc genhtml_function_coverage=1 00:32:06.607 --rc genhtml_legend=1 00:32:06.607 --rc geninfo_all_blocks=1 00:32:06.607 --rc geninfo_unexecuted_blocks=1 00:32:06.607 00:32:06.607 ' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:06.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.607 --rc genhtml_branch_coverage=1 00:32:06.607 --rc genhtml_function_coverage=1 00:32:06.607 --rc genhtml_legend=1 00:32:06.607 --rc geninfo_all_blocks=1 00:32:06.607 --rc geninfo_unexecuted_blocks=1 00:32:06.607 00:32:06.607 ' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:06.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.607 --rc genhtml_branch_coverage=1 00:32:06.607 --rc genhtml_function_coverage=1 00:32:06.607 --rc genhtml_legend=1 00:32:06.607 --rc geninfo_all_blocks=1 00:32:06.607 --rc geninfo_unexecuted_blocks=1 00:32:06.607 00:32:06.607 ' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:06.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:06.607 --rc genhtml_branch_coverage=1 00:32:06.607 --rc genhtml_function_coverage=1 00:32:06.607 --rc genhtml_legend=1 00:32:06.607 --rc geninfo_all_blocks=1 00:32:06.607 --rc geninfo_unexecuted_blocks=1 00:32:06.607 00:32:06.607 ' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
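The trace above steps through the `cmp_versions`/`lt 1.15 2` check that gates the lcov options. A minimal standalone sketch of the same dotted-version "less than" comparison (a simplified reimplementation for illustration, not the exact `scripts/common.sh` source):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component-by-component, treating
# missing components as 0. Returns 0 (true) when $1 < $2.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # versions are equal
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.1 2.1 || echo "2.1 == 2.1"
```

The numeric comparison is what makes this version-aware: `1.9 < 1.15` holds here (9 < 15), whereas a plain string comparison would get it backwards.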
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:06.607 10:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:06.607 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:06.608 10:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:06.608 10:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:08.511 
10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:08.511 10:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:08.511 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:08.511 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:08.511 Found net devices under 0000:09:00.0: cvl_0_0 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:08.511 Found net devices under 0000:09:00.1: cvl_0_1 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:08.511 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
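The discovery pass above finds the two E810 ports (0x8086:0x159b) and maps each PCI address to its kernel net device via sysfs. A sketch of that mapping step, with the sysfs root as a parameter so the logic can be exercised against a fake tree (the real script reads `/sys/bus/pci/devices` directly; the function name is ours):

```shell
#!/usr/bin/env bash
# For one PCI address, list the network interfaces sysfs exposes under it,
# mirroring the pci_net_devs glob-and-strip seen in the trace.
list_pci_net_devs() {
  local sysfs_root=$1 pci=$2
  local -a pci_net_devs=("$sysfs_root/$pci/net/"*)
  # If the glob did not match, the literal pattern remains and -e fails:
  [[ -e ${pci_net_devs[0]} ]] || return 1
  pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths, keep ifnames
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```

The `${array[@]##*/}` expansion applies the prefix strip to every element at once, which is how the trace turns full sysfs paths into names like `cvl_0_0`.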
00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:08.512 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:08.771 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:08.772 10:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:08.772 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:08.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:08.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:32:08.772 00:32:08.772 --- 10.0.0.2 ping statistics --- 00:32:08.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.772 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:08.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:08.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:32:08.772 00:32:08.772 --- 10.0.0.1 ping statistics --- 00:32:08.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:08.772 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
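The nvmf_tcp_init sequence above (flush, create namespace, move the target NIC in, address both ends, bring links up, then ping in both directions) requires root and the physical NICs. The sketch below records the same command sequence through a local `ip` shim so it can be inspected without privileges; drop the shim function to run the real thing as root.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Shim: record commands instead of executing them (needs no root).
ip() { recorded+=("ip $*"); }
recorded=()

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side interface, moved into the namespace
INI_IF=cvl_0_1      # initiator-side interface, stays in the root namespace

# Same order as nvmf_tcp_init in the log.
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

printf '%s\n' "${recorded[@]}"
```

Isolating the target NIC in its own network namespace is what lets the single host ping 10.0.0.2 from the root namespace and 10.0.0.1 from inside it, as the ping output above confirms.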
nvmfpid=2693896 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2693896 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2693896 ']' 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:08.772 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:08.772 [2024-12-09 10:42:41.134934] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:08.772 [2024-12-09 10:42:41.136029] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:32:08.772 [2024-12-09 10:42:41.136099] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:08.772 [2024-12-09 10:42:41.209361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.031 [2024-12-09 10:42:41.265860] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.031 [2024-12-09 10:42:41.265914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.031 [2024-12-09 10:42:41.265927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.031 [2024-12-09 10:42:41.265939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.031 [2024-12-09 10:42:41.265948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:09.031 [2024-12-09 10:42:41.266567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.031 [2024-12-09 10:42:41.353565] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.031 [2024-12-09 10:42:41.353834] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.031 [2024-12-09 10:42:41.403211] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.031 
10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.031 [2024-12-09 10:42:41.419375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.031 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 malloc0 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:09.032 { 00:32:09.032 "params": { 00:32:09.032 "name": "Nvme$subsystem", 00:32:09.032 "trtype": "$TEST_TRANSPORT", 00:32:09.032 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:09.032 "adrfam": "ipv4", 00:32:09.032 "trsvcid": "$NVMF_PORT", 00:32:09.032 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:09.032 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:09.032 "hdgst": ${hdgst:-false}, 00:32:09.032 "ddgst": ${ddgst:-false} 00:32:09.032 }, 00:32:09.032 "method": "bdev_nvme_attach_controller" 00:32:09.032 } 00:32:09.032 EOF 00:32:09.032 )") 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:09.032 10:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:09.032 10:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:09.032 "params": { 00:32:09.032 "name": "Nvme1", 00:32:09.032 "trtype": "tcp", 00:32:09.032 "traddr": "10.0.0.2", 00:32:09.032 "adrfam": "ipv4", 00:32:09.032 "trsvcid": "4420", 00:32:09.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:09.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:09.032 "hdgst": false, 00:32:09.032 "ddgst": false 00:32:09.032 }, 00:32:09.032 "method": "bdev_nvme_attach_controller" 00:32:09.032 }' 00:32:09.291 [2024-12-09 10:42:41.497569] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:32:09.291 [2024-12-09 10:42:41.497648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2694029 ] 00:32:09.291 [2024-12-09 10:42:41.563800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.291 [2024-12-09 10:42:41.621268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.549 Running I/O for 10 seconds... 
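The gen_nvmf_target_json output above is produced by expanding environment variables into a heredoc fragment per subsystem, then joining the fragments with jq. A self-contained sketch of one fragment follows; the variable values are fixed here as assumptions matching this run (the real function reads them from the test environment).

```shell
#!/usr/bin/env bash
set -euo pipefail

# Values gen_nvmf_target_json normally takes from the environment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# Same heredoc pattern as nvmf/common.sh: expand variables into a params
# blob for bdev_nvme_attach_controller; hdgst/ddgst default to false.
config=$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

bdevperf then consumes this JSON over a file descriptor (`--json /dev/fd/62` in the log), so the config never touches disk.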
00:32:11.423 5602.00 IOPS, 43.77 MiB/s [2024-12-09T09:42:45.237Z] 5669.00 IOPS, 44.29 MiB/s [2024-12-09T09:42:46.169Z] 5671.33 IOPS, 44.31 MiB/s [2024-12-09T09:42:47.100Z] 5672.75 IOPS, 44.32 MiB/s [2024-12-09T09:42:48.049Z] 5672.00 IOPS, 44.31 MiB/s [2024-12-09T09:42:48.983Z] 5672.33 IOPS, 44.32 MiB/s [2024-12-09T09:42:49.962Z] 5674.00 IOPS, 44.33 MiB/s [2024-12-09T09:42:50.895Z] 5681.62 IOPS, 44.39 MiB/s [2024-12-09T09:42:51.828Z] 5687.67 IOPS, 44.43 MiB/s [2024-12-09T09:42:52.086Z] 5690.80 IOPS, 44.46 MiB/s 00:32:19.645 Latency(us) 00:32:19.645 [2024-12-09T09:42:52.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:19.645 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:32:19.645 Verification LBA range: start 0x0 length 0x1000 00:32:19.645 Nvme1n1 : 10.02 5693.38 44.48 0.00 0.00 22419.30 2645.71 29127.11 00:32:19.645 [2024-12-09T09:42:52.086Z] =================================================================================================================== 00:32:19.645 [2024-12-09T09:42:52.086Z] Total : 5693.38 44.48 0.00 0.00 22419.30 2645.71 29127.11 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2695222 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:19.903 10:42:52 
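The MiB/s column in the bdevperf summary above follows directly from the IOPS figure and the 8192-byte IO size (`-o 8192` on the command line): MiB/s = IOPS * io_size / 2^20. A quick check that reproduces the 44.48 MiB/s reported for 5693.38 IOPS:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Reproduce the MiB/s column of the bdevperf summary from IOPS and IO size.
iops=5693.38
io_size=8192   # bytes, from -o 8192 on the bdevperf invocation above
mibs=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibs MiB/s"
```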
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:19.903 { 00:32:19.903 "params": { 00:32:19.903 "name": "Nvme$subsystem", 00:32:19.903 "trtype": "$TEST_TRANSPORT", 00:32:19.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:19.903 "adrfam": "ipv4", 00:32:19.903 "trsvcid": "$NVMF_PORT", 00:32:19.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:19.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:19.903 "hdgst": ${hdgst:-false}, 00:32:19.903 "ddgst": ${ddgst:-false} 00:32:19.903 }, 00:32:19.903 "method": "bdev_nvme_attach_controller" 00:32:19.903 } 00:32:19.903 EOF 00:32:19.903 )") 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:19.903 [2024-12-09 10:42:52.103111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.103173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:19.903 10:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:19.903 "params": { 00:32:19.903 "name": "Nvme1", 00:32:19.903 "trtype": "tcp", 00:32:19.903 "traddr": "10.0.0.2", 00:32:19.903 "adrfam": "ipv4", 00:32:19.903 "trsvcid": "4420", 00:32:19.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:19.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:19.903 "hdgst": false, 00:32:19.903 "ddgst": false 00:32:19.903 }, 00:32:19.903 "method": "bdev_nvme_attach_controller" 00:32:19.903 }' 00:32:19.903 [2024-12-09 10:42:52.111049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.111070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.119049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.119068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.127047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.127066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.135049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.135068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.141494] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:32:19.903 [2024-12-09 10:42:52.141567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2695222 ] 00:32:19.903 [2024-12-09 10:42:52.143047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.143066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.151047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.151066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.159047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.159066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.167046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.167065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.175047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.175066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.183047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.183066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.191047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.191066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:32:19.903 [2024-12-09 10:42:52.199047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.903 [2024-12-09 10:42:52.199065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.903 [2024-12-09 10:42:52.207047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.207065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.209200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:19.904 [2024-12-09 10:42:52.215056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.215077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.223079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.223109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.231053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.231073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.239049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.239067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.247048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.247067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.255048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.255067] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.263047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.263065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.270855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.904 [2024-12-09 10:42:52.271049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.271068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.279048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.279066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.287071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.287099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.295075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.295106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.303076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.303110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.311078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.311110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.319081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:32:19.904 [2024-12-09 10:42:52.319114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.327078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.327111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.335075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.335107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:19.904 [2024-12-09 10:42:52.343064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:19.904 [2024-12-09 10:42:52.343089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.351085] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.351133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.359078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.359111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.367077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.367110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.375057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.375078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.383049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 
10:42:52.383068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.391054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.391090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.399055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.399093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.407053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.407074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.415053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.415073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.423050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.423071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.431049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.431068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.439048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.439067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.447048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.447066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.455048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.455066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.463052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.463072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.471053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.471074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.479048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.479067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.487047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.487066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.495047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.495066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.503047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.503065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.511047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.511065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 
[2024-12-09 10:42:52.519049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.519070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.527048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.527067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.535048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.535066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.543048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.543066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.551049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.551068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.559054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.559076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.567049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.567083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.575053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.575073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.583062] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.161 [2024-12-09 10:42:52.583082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.161 [2024-12-09 10:42:52.591066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.162 [2024-12-09 10:42:52.591087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.162 [2024-12-09 10:42:52.599069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.162 [2024-12-09 10:42:52.599093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.607055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.607079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.615053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.615076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.623054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.623077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 Running I/O for 5 seconds... 
00:32:20.420 [2024-12-09 10:42:52.637846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.637874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.649315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.649341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.662571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.662613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.672733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.672759] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.688384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.688409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.706900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.706925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.716900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.716926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.729030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.729055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.745043] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.745068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.755224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.755250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.767211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.767253] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.777972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.778010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.792072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.792098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.802217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.802242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.816644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.816669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.832945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.832971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.843029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.843054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.420 [2024-12-09 10:42:52.854920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.420 [2024-12-09 10:42:52.854944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.866269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.866295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.877203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.877231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.893512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.893536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.908757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.908797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.918236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.918263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.932913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.932938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.951727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 
[2024-12-09 10:42:52.951752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.962220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.962247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.976357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.976383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.985616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.985642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:52.999759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:52.999785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.009203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.009228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.020896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.020929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.036035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.036060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.045835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.045859] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.061455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.061480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.077618] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.077660] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.092455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.092483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.101377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.101404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.678 [2024-12-09 10:42:53.113442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.678 [2024-12-09 10:42:53.113467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.128647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.128690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.138554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.138581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.150491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.150518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:20.936 [2024-12-09 10:42:53.161745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.161770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.176884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.176911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.186669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.186696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.198785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.198811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.210189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.210216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.222729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.222756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.232579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.232605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.243933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.243958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.254938] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.254974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.265852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.265875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.278739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.278765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.288398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.288437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.300278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.300304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.311227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.311251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.321934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.321973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.337417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.337443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.353483] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.353508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.936 [2024-12-09 10:42:53.371210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.936 [2024-12-09 10:42:53.371237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.381992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.382017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.395430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.395473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.405469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.405495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.419665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.419690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.429631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.429655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.443850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.443875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.453189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 
[2024-12-09 10:42:53.453214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.214 [2024-12-09 10:42:53.468356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.214 [2024-12-09 10:42:53.468381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.478840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.478865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.492332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.492372] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.502180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.502221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.516000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.516027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.525356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.525382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.537068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.537095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.552688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.552713] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.570994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.571019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.580881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.580920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.592774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.592800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.607875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.607916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.617700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.617724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 11550.00 IOPS, 90.23 MiB/s [2024-12-09T09:42:53.656Z] [2024-12-09 10:42:53.633488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.633512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.215 [2024-12-09 10:42:53.648754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.215 [2024-12-09 10:42:53.648794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.658761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.658788] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.670532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.670559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.681388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.681429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.694504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.694531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.703953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.703980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.715952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.715977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.726805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.726830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.737390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.737431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.752972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.752999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:21.473 [2024-12-09 10:42:53.762561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.762587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.774658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.774683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.787328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.787356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.796793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.796819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.812701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.812725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.822295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.822321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.833529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.833553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.848126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.848176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.857695] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.857720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.873971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.873998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.883937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.883962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.895870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.895909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.473 [2024-12-09 10:42:53.906937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.473 [2024-12-09 10:42:53.906961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.731 [2024-12-09 10:42:53.917937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.917962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:53.932037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.932062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:53.941733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.941757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:53.957539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.957564] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:53.972814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.972841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:53.982350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.982376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:53.994134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:53.994169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.009856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.009881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.024467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.024494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.033838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.033863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.050133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.050182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.059928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 
[2024-12-09 10:42:54.059955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.071842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.071867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.082590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.082613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.093445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.093471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.108095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.108123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.117096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.117136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.128951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.128976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.145104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.145131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.732 [2024-12-09 10:42:54.154825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.732 [2024-12-09 10:42:54.154851] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:21.732 [2024-12-09 10:42:54.166632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:21.732 [2024-12-09 10:42:54.166673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same subsystem.c/nvmf_rpc.c error pair repeated at roughly 10-15 ms intervals from 10:42:54.177 through 10:42:56.055; intermediate occurrences omitted ...]
11578.00 IOPS, 90.45 MiB/s [2024-12-09T09:42:54.690Z]
11595.33 IOPS, 90.59 MiB/s [2024-12-09T09:42:55.720Z]
00:32:23.813 [2024-12-09 10:42:56.065055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:23.813 [2024-12-09 10:42:56.065079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:23.813 [2024-12-09 10:42:56.076499]
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.076524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.092366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.092393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.102135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.102170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.117699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.117723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.133073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.133099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.142744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.142769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.154640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.154665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.165441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.165466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.181671] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.181696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.196867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.196894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.206485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.206511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.218254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.218280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.233529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.233553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:23.813 [2024-12-09 10:42:56.250788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:23.813 [2024-12-09 10:42:56.250814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.260777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.260803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.276607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.276631] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.286134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 
[2024-12-09 10:42:56.286179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.302285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.302310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.312420] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.312446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.324357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.324382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.340201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.340227] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.349738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.349764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.365271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.365297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.380567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.380594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.389942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.389968] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.403647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.403671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.413575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.413600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.428021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.428046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.437527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.437552] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.451546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.451571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.461042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.461066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.472752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.472777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.072 [2024-12-09 10:42:56.489260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.489297] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:24.072 [2024-12-09 10:42:56.507254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.072 [2024-12-09 10:42:56.507287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.330 [2024-12-09 10:42:56.517259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.330 [2024-12-09 10:42:56.517287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.330 [2024-12-09 10:42:56.530872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.330 [2024-12-09 10:42:56.530906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.540798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.540823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.556287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.556314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.565857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.565882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.581910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.581935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.597057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.597097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.615230] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.615271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.625801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.625827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 11608.75 IOPS, 90.69 MiB/s [2024-12-09T09:42:56.772Z] [2024-12-09 10:42:56.640484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.640512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.650338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.650364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.663532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.663572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.672977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.673002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.684216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.684241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.693894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.693919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.709843] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.709868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.724342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.724369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.734047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.734073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.748732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.748774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.758572] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.758599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.331 [2024-12-09 10:42:56.770733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.331 [2024-12-09 10:42:56.770758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.781850] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.781874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.797466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.797492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.813098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.813125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.822807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.822832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.834998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.835023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.845975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.846000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.860416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.860443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.869700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.869725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.883833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.883859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.893502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.893527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.907848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 
[2024-12-09 10:42:56.907874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.918945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.918987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.929604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.929628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.945414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.589 [2024-12-09 10:42:56.945438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.589 [2024-12-09 10:42:56.963306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.590 [2024-12-09 10:42:56.963331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.590 [2024-12-09 10:42:56.974296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.590 [2024-12-09 10:42:56.974323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.590 [2024-12-09 10:42:56.989146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.590 [2024-12-09 10:42:56.989173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.590 [2024-12-09 10:42:56.998670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.590 [2024-12-09 10:42:56.998711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.590 [2024-12-09 10:42:57.010390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.590 [2024-12-09 10:42:57.010416] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.590 [2024-12-09 10:42:57.021462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.590 [2024-12-09 10:42:57.021487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.037265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.037292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.046890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.046913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.058355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.058382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.069401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.069442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.085030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.085054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.094726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.094750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.106261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.106288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:24.849 [2024-12-09 10:42:57.116164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.116190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.849 [2024-12-09 10:42:57.127797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.849 [2024-12-09 10:42:57.127821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.138108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.138158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.148105] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.148154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.159796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.159820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.170718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.170743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.183266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.183292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.192656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.192682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.204786] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.204813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.220607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.220659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.230381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.230408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.242193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.242220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.252975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.252998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.268995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.269019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.850 [2024-12-09 10:42:57.278818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.850 [2024-12-09 10:42:57.278844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.290451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.290478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.300648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.300672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.311905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.311944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.322865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.322890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.333688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.333712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.348557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.348584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.358875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.358899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.370742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.370766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.381598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.381621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.396668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 
[2024-12-09 10:42:57.396695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.406056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.406082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.420818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.420843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.438855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.438894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.448940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.448973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.460631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.460655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.477197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.477223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.494868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.494894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.504566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.504590] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.520747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.520771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.537365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.537390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.108 [2024-12-09 10:42:57.547401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.108 [2024-12-09 10:42:57.547443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.559393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.559419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.570022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.570062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.585523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.585562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.602845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.602874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.613685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.613709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:32:25.367 [2024-12-09 10:42:57.629172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:25.367 [2024-12-09 10:42:57.629198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:25.367 11621.20 IOPS, 90.79 MiB/s [2024-12-09T09:42:57.808Z] [2024-12-09 10:42:57.638883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:25.367 [2024-12-09 10:42:57.638908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:25.367 [2024-12-09 10:42:57.647073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:25.367 [2024-12-09 10:42:57.647096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:25.367
00:32:25.367 Latency(us)
00:32:25.367 [2024-12-09T09:42:57.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:25.367 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:25.367 Nvme1n1 : 5.01 11622.50 90.80 0.00 0.00 10998.16 3228.25 17961.72
00:32:25.367 [2024-12-09T09:42:57.808Z] ===================================================================================================================
00:32:25.367 [2024-12-09T09:42:57.808Z] Total : 11622.50 90.80 0.00 0.00 10998.16 3228.25 17961.72
00:32:25.367 [2024-12-09 10:42:57.655053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:25.367 [2024-12-09 10:42:57.655084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:25.367 [2024-12-09 10:42:57.663052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:25.367 [2024-12-09 10:42:57.663074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:32:25.367 [2024-12-09 10:42:57.671084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:32:25.367 [2024-12-09 10:42:57.671112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.679111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.679171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.687107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.687169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.695104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.695165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.703106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.703164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.711108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.711161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.719107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.719168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.727104] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.727163] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.735107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.735166] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.743109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.743167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.751111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.751170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.759107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.759189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.767109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.767160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.775107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.775156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.783077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.783109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.791050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.791068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.367 [2024-12-09 10:42:57.799053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.799073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:25.367 [2024-12-09 10:42:57.807061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.367 [2024-12-09 10:42:57.807085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.819125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.819188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.827109] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.827158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.835086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.835134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.843048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.843067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.851047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.851066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.859049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.859069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.867047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.867066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.875052] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.875071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.883047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.883065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.891046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.891065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 [2024-12-09 10:42:57.899047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.626 [2024-12-09 10:42:57.899066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2695222) - No such process 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2695222 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.626 delay0 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.626 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:25.627 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.627 10:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:25.627 [2024-12-09 10:42:58.016225] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:33.734 Initializing NVMe Controllers 00:32:33.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:33.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:33.734 Initialization complete. Launching workers. 
00:32:33.734 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 229, failed: 21939
00:32:33.734 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22041, failed to submit 127
00:32:33.734 success 21972, unsuccessful 69, failed 0
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:33.734 rmmod nvme_tcp
00:32:33.734 rmmod nvme_fabrics
00:32:33.734 rmmod nvme_keyring
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2693896 ']'
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2693896
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy --
common/autotest_common.sh@954 -- # '[' -z 2693896 ']' 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2693896 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2693896 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2693896' 00:32:33.734 killing process with pid 2693896 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2693896 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2693896 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.734 10:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:35.645 00:32:35.645 real 0m28.797s 00:32:35.645 user 0m41.141s 00:32:35.645 sys 0m9.944s 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:35.645 ************************************ 00:32:35.645 END TEST nvmf_zcopy 00:32:35.645 ************************************ 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:35.645 
************************************ 00:32:35.645 START TEST nvmf_nmic 00:32:35.645 ************************************ 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:35.645 * Looking for test storage... 00:32:35.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:35.645 10:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:35.645 10:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:35.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.645 --rc genhtml_branch_coverage=1 00:32:35.645 --rc genhtml_function_coverage=1 00:32:35.645 --rc genhtml_legend=1 00:32:35.645 --rc geninfo_all_blocks=1 00:32:35.645 --rc geninfo_unexecuted_blocks=1 00:32:35.645 00:32:35.645 ' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:35.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.645 --rc genhtml_branch_coverage=1 00:32:35.645 --rc genhtml_function_coverage=1 00:32:35.645 --rc genhtml_legend=1 00:32:35.645 --rc geninfo_all_blocks=1 00:32:35.645 --rc geninfo_unexecuted_blocks=1 00:32:35.645 00:32:35.645 ' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:35.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.645 --rc genhtml_branch_coverage=1 00:32:35.645 --rc genhtml_function_coverage=1 00:32:35.645 --rc genhtml_legend=1 00:32:35.645 --rc geninfo_all_blocks=1 00:32:35.645 --rc geninfo_unexecuted_blocks=1 00:32:35.645 00:32:35.645 ' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:35.645 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.645 --rc genhtml_branch_coverage=1 00:32:35.645 --rc genhtml_function_coverage=1 00:32:35.645 --rc genhtml_legend=1 00:32:35.645 --rc geninfo_all_blocks=1 00:32:35.645 --rc geninfo_unexecuted_blocks=1 00:32:35.645 00:32:35.645 ' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:35.645 10:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.645 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.646 10:43:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:35.646 10:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.548 10:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:37.548 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:37.549 10:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:37.549 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:37.549 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.549 10:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:37.549 Found net devices under 0000:09:00.0: cvl_0_0 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.549 10:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:37.549 Found net devices under 0000:09:00.1: cvl_0_1 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:37.549 10:43:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:37.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:32:37.549 00:32:37.549 --- 10.0.0.2 ping statistics --- 00:32:37.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.549 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:32:37.549 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:37.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:32:37.806 00:32:37.806 --- 10.0.0.1 ping statistics --- 00:32:37.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.806 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:37.806 10:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2698718 
00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2698718 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2698718 ']' 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.807 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:37.807 [2024-12-09 10:43:10.078958] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:37.807 [2024-12-09 10:43:10.080025] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:32:37.807 [2024-12-09 10:43:10.080083] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.807 [2024-12-09 10:43:10.150594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:37.807 [2024-12-09 10:43:10.210307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.807 [2024-12-09 10:43:10.210361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.807 [2024-12-09 10:43:10.210390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.807 [2024-12-09 10:43:10.210402] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:37.807 [2024-12-09 10:43:10.210424] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.807 [2024-12-09 10:43:10.212067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.807 [2024-12-09 10:43:10.212165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.807 [2024-12-09 10:43:10.212199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.807 [2024-12-09 10:43:10.212203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:38.065 [2024-12-09 10:43:10.303863] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:38.065 [2024-12-09 10:43:10.304035] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:38.065 [2024-12-09 10:43:10.304319] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:38.065 [2024-12-09 10:43:10.304963] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:38.065 [2024-12-09 10:43:10.305221] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 [2024-12-09 10:43:10.360891] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 Malloc0 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 [2024-12-09 10:43:10.425105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:38.065 test case1: single bdev can't be used in multiple subsystems 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 [2024-12-09 10:43:10.448850] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:32:38.065 [2024-12-09 10:43:10.448879] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:38.065 [2024-12-09 10:43:10.448909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:38.065 request: 00:32:38.065 { 00:32:38.065 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:38.065 "namespace": { 00:32:38.065 "bdev_name": "Malloc0", 00:32:38.065 "no_auto_visible": false, 00:32:38.065 "hide_metadata": false 00:32:38.065 }, 00:32:38.065 "method": "nvmf_subsystem_add_ns", 00:32:38.065 "req_id": 1 00:32:38.065 } 00:32:38.065 Got JSON-RPC error response 00:32:38.065 response: 00:32:38.065 { 00:32:38.065 "code": -32602, 00:32:38.065 "message": "Invalid parameters" 00:32:38.065 } 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:38.065 Adding namespace failed - expected result. 
00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:38.065 test case2: host connect to nvmf target in multiple paths 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:38.065 [2024-12-09 10:43:10.460925] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.065 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:38.323 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:38.580 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:38.580 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:38.580 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:38.580 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:38.580 10:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:41.100 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:41.100 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:41.100 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:41.101 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:41.101 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:41.101 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:41.101 10:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:41.101 [global] 00:32:41.101 thread=1 00:32:41.101 invalidate=1 00:32:41.101 rw=write 00:32:41.101 time_based=1 00:32:41.101 runtime=1 00:32:41.101 ioengine=libaio 00:32:41.101 direct=1 00:32:41.101 bs=4096 00:32:41.101 iodepth=1 00:32:41.101 norandommap=0 00:32:41.101 numjobs=1 00:32:41.101 00:32:41.101 verify_dump=1 00:32:41.101 verify_backlog=512 00:32:41.101 verify_state_save=0 00:32:41.101 do_verify=1 00:32:41.101 verify=crc32c-intel 00:32:41.101 [job0] 00:32:41.101 filename=/dev/nvme0n1 00:32:41.101 Could not set queue depth (nvme0n1) 00:32:41.101 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.101 fio-3.35 00:32:41.101 Starting 1 thread 00:32:42.034 00:32:42.034 job0: (groupid=0, jobs=1): err= 0: pid=2699111: Mon Dec 9 
10:43:14 2024 00:32:42.034 read: IOPS=21, BW=86.2KiB/s (88.3kB/s)(88.0KiB/1021msec) 00:32:42.034 slat (nsec): min=15300, max=35297, avg=21838.05, stdev=8370.55 00:32:42.034 clat (usec): min=40661, max=41961, avg=41046.18, stdev=294.25 00:32:42.034 lat (usec): min=40677, max=41978, avg=41068.02, stdev=295.59 00:32:42.034 clat percentiles (usec): 00:32:42.034 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:32:42.034 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:42.034 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:42.034 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:42.034 | 99.99th=[42206] 00:32:42.034 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:32:42.034 slat (usec): min=6, max=29195, avg=70.94, stdev=1289.68 00:32:42.034 clat (usec): min=137, max=313, avg=154.91, stdev=10.69 00:32:42.034 lat (usec): min=143, max=29415, avg=225.86, stdev=1292.63 00:32:42.034 clat percentiles (usec): 00:32:42.034 | 1.00th=[ 141], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 149], 00:32:42.034 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:32:42.034 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 167], 00:32:42.034 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 314], 99.95th=[ 314], 00:32:42.034 | 99.99th=[ 314] 00:32:42.034 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:42.034 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:42.034 lat (usec) : 250=95.69%, 500=0.19% 00:32:42.034 lat (msec) : 50=4.12% 00:32:42.034 cpu : usr=0.39%, sys=0.69%, ctx=537, majf=0, minf=1 00:32:42.034 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:42.034 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.034 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:42.034 issued rwts: 
total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:42.034 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:42.034 00:32:42.034 Run status group 0 (all jobs): 00:32:42.034 READ: bw=86.2KiB/s (88.3kB/s), 86.2KiB/s-86.2KiB/s (88.3kB/s-88.3kB/s), io=88.0KiB (90.1kB), run=1021-1021msec 00:32:42.034 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:32:42.034 00:32:42.034 Disk stats (read/write): 00:32:42.034 nvme0n1: ios=46/512, merge=0/0, ticks=1772/78, in_queue=1850, util=98.70% 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:42.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:42.034 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:42.292 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:42.292 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:42.292 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:42.292 10:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:42.292 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:42.293 rmmod nvme_tcp 00:32:42.293 rmmod nvme_fabrics 00:32:42.293 rmmod nvme_keyring 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2698718 ']' 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2698718 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2698718 ']' 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2698718 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2698718 
00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2698718' 00:32:42.293 killing process with pid 2698718 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2698718 00:32:42.293 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2698718 00:32:42.552 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.552 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.552 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.553 10:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.553 10:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.455 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:44.455 00:32:44.455 real 0m9.250s 00:32:44.455 user 0m17.371s 00:32:44.455 sys 0m3.347s 00:32:44.455 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:44.455 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:44.455 ************************************ 00:32:44.455 END TEST nvmf_nmic 00:32:44.455 ************************************ 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:44.716 ************************************ 00:32:44.716 START TEST nvmf_fio_target 00:32:44.716 ************************************ 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:44.716 * Looking for test storage... 
00:32:44.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:44.716 10:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.716 
10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:44.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.716 --rc genhtml_branch_coverage=1 00:32:44.716 --rc genhtml_function_coverage=1 00:32:44.716 --rc genhtml_legend=1 00:32:44.716 --rc geninfo_all_blocks=1 00:32:44.716 --rc geninfo_unexecuted_blocks=1 00:32:44.716 00:32:44.716 ' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:44.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.716 --rc genhtml_branch_coverage=1 00:32:44.716 --rc genhtml_function_coverage=1 00:32:44.716 --rc genhtml_legend=1 00:32:44.716 --rc geninfo_all_blocks=1 00:32:44.716 --rc geninfo_unexecuted_blocks=1 00:32:44.716 00:32:44.716 ' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:44.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.716 --rc genhtml_branch_coverage=1 00:32:44.716 --rc genhtml_function_coverage=1 00:32:44.716 --rc genhtml_legend=1 00:32:44.716 --rc geninfo_all_blocks=1 00:32:44.716 --rc geninfo_unexecuted_blocks=1 00:32:44.716 00:32:44.716 ' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:44.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.716 --rc genhtml_branch_coverage=1 00:32:44.716 --rc genhtml_function_coverage=1 00:32:44.716 --rc genhtml_legend=1 00:32:44.716 --rc geninfo_all_blocks=1 
00:32:44.716 --rc geninfo_unexecuted_blocks=1 00:32:44.716 00:32:44.716 ' 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.716 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:44.717 
10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.717 10:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.717 
10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.717 10:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.717 10:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:47.248 10:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:47.248 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:47.248 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:47.248 
10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.248 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:47.249 Found net 
devices under 0000:09:00.0: cvl_0_0 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:47.249 Found net devices under 0000:09:00.1: cvl_0_1 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:47.249 10:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:47.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:47.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:32:47.249 00:32:47.249 --- 10.0.0.2 ping statistics --- 00:32:47.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.249 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:47.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:47.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:32:47.249 00:32:47.249 --- 10.0.0.1 ping statistics --- 00:32:47.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:47.249 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.249 10:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2701306 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2701306 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2701306 ']' 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:47.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:47.249 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.249 [2024-12-09 10:43:19.568629] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:47.249 [2024-12-09 10:43:19.569696] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:32:47.249 [2024-12-09 10:43:19.569755] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:47.249 [2024-12-09 10:43:19.638846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:47.507 [2024-12-09 10:43:19.696040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:47.507 [2024-12-09 10:43:19.696089] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:47.507 [2024-12-09 10:43:19.696117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:47.507 [2024-12-09 10:43:19.696128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:47.507 [2024-12-09 10:43:19.696144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:47.507 [2024-12-09 10:43:19.697722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:47.507 [2024-12-09 10:43:19.697778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:47.508 [2024-12-09 10:43:19.697893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:47.508 [2024-12-09 10:43:19.697897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:47.508 [2024-12-09 10:43:19.782698] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:47.508 [2024-12-09 10:43:19.782900] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:47.508 [2024-12-09 10:43:19.783208] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:32:47.508 [2024-12-09 10:43:19.783876] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:47.508 [2024-12-09 10:43:19.784086] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:47.508 10:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:47.765 [2024-12-09 10:43:20.106707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.765 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:17.481 Resuming build at Mon Dec 09 09:44:49 UTC 2024 after Jenkins restart 00:34:21.086 Waiting for reconnection of GP6 before proceeding with build 00:34:21.383 Timeout set to expire in 30 min 00:34:21.408 Ready to run at Mon Dec 09 09:44:53 UTC 2024 00:34:21.724 10:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:21.725 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.725 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:21.725 10:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.727 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:21.727 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.727 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:21.728 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:21.728 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.728 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:21.729 10:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.729 10:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:21.730 10:43:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:21.730 10:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:21.731 10:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:21.731 10:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:21.732 10:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.732 10:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:21.732 10:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:21.733 10:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:21.734 10:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.734 [2024-12-09 10:43:23.934873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.735 10:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:21.735 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:21.736 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:21.736 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:21.736 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:21.737 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:21.737 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:21.738 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:21.738 10:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:21.738 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:21.739 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:21.739 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:21.740 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:21.740 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:21.740 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:21.741 10:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:21.741 [global] 00:34:21.741 thread=1 00:34:21.741 invalidate=1 00:34:21.741 rw=write 00:34:21.743 time_based=1 00:34:21.743 runtime=1 00:34:21.743 ioengine=libaio 00:34:21.743 direct=1 00:34:21.744 bs=4096 00:34:21.744 iodepth=1 00:34:21.744 norandommap=0 00:34:21.744 numjobs=1 00:34:21.744 00:34:21.744 verify_dump=1 00:34:21.744 verify_backlog=512 00:34:21.744 verify_state_save=0 00:34:21.744 do_verify=1 00:34:21.744 verify=crc32c-intel 00:34:21.744 [job0] 00:34:21.744 filename=/dev/nvme0n1 00:34:21.744 [job1] 00:34:21.744 filename=/dev/nvme0n2 00:34:21.744 [job2] 00:34:21.744 filename=/dev/nvme0n3 00:34:21.744 [job3] 00:34:21.744 filename=/dev/nvme0n4 00:34:21.745 Could not set queue depth (nvme0n1) 00:34:21.745 Could not set queue depth (nvme0n2) 00:34:21.745 Could not set queue depth (nvme0n3) 00:34:21.745 Could not set queue depth (nvme0n4) 00:34:21.745 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.745 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.746 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.746 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.746 fio-3.35 00:34:21.746 Starting 4 threads 00:34:21.746 00:34:21.747 job0: (groupid=0, jobs=1): err= 0: 
pid=2702262: Mon Dec 9 10:43:28 2024 00:34:21.747 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:34:21.747 slat (nsec): min=13740, max=42128, avg=28852.14, stdev=9183.15 00:34:21.747 clat (usec): min=40923, max=42019, avg=41708.81, stdev=435.67 00:34:21.748 lat (usec): min=40958, max=42035, avg=41737.67, stdev=434.36 00:34:21.748 clat percentiles (usec): 00:34:21.748 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:21.748 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:21.748 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:21.749 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:21.749 | 99.99th=[42206] 00:34:21.749 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:34:21.749 slat (nsec): min=7620, max=38763, avg=13656.16, stdev=6581.94 00:34:21.750 clat (usec): min=193, max=604, avg=240.88, stdev=30.20 00:34:21.750 lat (usec): min=206, max=615, avg=254.54, stdev=30.93 00:34:21.750 clat percentiles (usec): 00:34:21.750 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 223], 00:34:21.750 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 243], 00:34:21.751 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 269], 95.00th=[ 273], 00:34:21.751 | 99.00th=[ 302], 99.50th=[ 424], 99.90th=[ 603], 99.95th=[ 603], 00:34:21.751 | 99.99th=[ 603] 00:34:21.751 bw ( KiB/s): min= 4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.752 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.752 lat (usec) : 250=68.67%, 500=27.02%, 750=0.38% 00:34:21.752 lat (msec) : 50=3.94% 00:34:21.752 cpu : usr=0.69%, sys=0.79%, ctx=533, majf=0, minf=2 00:34:21.752 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.753 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.753 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:21.753 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.753 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.754 job1: (groupid=0, jobs=1): err= 0: pid=2702263: Mon Dec 9 10:43:28 2024 00:34:21.754 read: IOPS=46, BW=187KiB/s (191kB/s)(192KiB/1028msec) 00:34:21.754 slat (nsec): min=8561, max=35253, avg=21039.62, stdev=9209.43 00:34:21.754 clat (usec): min=259, max=41226, avg=18130.93, stdev=20339.76 00:34:21.755 lat (usec): min=273, max=41244, avg=18151.97, stdev=20345.58 00:34:21.755 clat percentiles (usec): 00:34:21.755 | 1.00th=[ 260], 5.00th=[ 338], 10.00th=[ 367], 20.00th=[ 388], 00:34:21.755 | 30.00th=[ 392], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[40633], 00:34:21.755 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:21.756 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:21.756 | 99.99th=[41157] 00:34:21.756 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:34:21.756 slat (usec): min=5, max=41658, avg=96.19, stdev=1840.41 00:34:21.756 clat (usec): min=149, max=908, avg=205.03, stdev=41.67 00:34:21.756 lat (usec): min=157, max=41829, avg=301.22, stdev=1839.38 00:34:21.756 clat percentiles (usec): 00:34:21.757 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:34:21.757 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 206], 00:34:21.757 | 70.00th=[ 219], 80.00th=[ 229], 90.00th=[ 239], 95.00th=[ 247], 00:34:21.757 | 99.00th=[ 289], 99.50th=[ 363], 99.90th=[ 906], 99.95th=[ 906], 00:34:21.757 | 99.99th=[ 906] 00:34:21.758 bw ( KiB/s): min= 4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.758 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.758 lat (usec) : 250=87.86%, 500=8.21%, 1000=0.18% 00:34:21.758 lat (msec) : 50=3.75% 00:34:21.758 cpu : usr=0.49%, sys=0.97%, ctx=563, majf=0, minf=1 00:34:21.759 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.759 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.759 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.759 issued rwts: total=48,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.759 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.759 job2: (groupid=0, jobs=1): err= 0: pid=2702264: Mon Dec 9 10:43:28 2024 00:34:21.760 read: IOPS=35, BW=143KiB/s (146kB/s)(148KiB/1035msec) 00:34:21.760 slat (nsec): min=9103, max=33897, avg=19983.49, stdev=10335.57 00:34:21.760 clat (usec): min=222, max=41162, avg=24829.73, stdev=19938.36 00:34:21.760 lat (usec): min=232, max=41195, avg=24849.71, stdev=19945.21 00:34:21.760 clat percentiles (usec): 00:34:21.761 | 1.00th=[ 223], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:34:21.761 | 30.00th=[ 293], 40.00th=[13698], 50.00th=[41157], 60.00th=[41157], 00:34:21.761 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:21.761 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:21.761 | 99.99th=[41157] 00:34:21.761 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:34:21.762 slat (nsec): min=6022, max=42153, avg=10089.52, stdev=4328.08 00:34:21.762 clat (usec): min=159, max=726, avg=210.64, stdev=41.41 00:34:21.762 lat (usec): min=167, max=734, avg=220.73, stdev=41.86 00:34:21.762 clat percentiles (usec): 00:34:21.762 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:34:21.763 | 30.00th=[ 186], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 217], 00:34:21.763 | 70.00th=[ 227], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 260], 00:34:21.763 | 99.00th=[ 330], 99.50th=[ 388], 99.90th=[ 725], 99.95th=[ 725], 00:34:21.763 | 99.99th=[ 725] 00:34:21.763 bw ( KiB/s): min= 4096, max= 4096, per=41.40%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.764 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.764 lat (usec) : 
250=87.80%, 500=7.65%, 750=0.36% 00:34:21.764 lat (msec) : 20=0.18%, 50=4.01% 00:34:21.764 cpu : usr=0.29%, sys=0.48%, ctx=551, majf=0, minf=1 00:34:21.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.765 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.766 job3: (groupid=0, jobs=1): err= 0: pid=2702265: Mon Dec 9 10:43:28 2024 00:34:21.766 read: IOPS=512, BW=2048KiB/s (2098kB/s)(2112KiB/1031msec) 00:34:21.766 slat (nsec): min=7838, max=35253, avg=16895.69, stdev=3659.44 00:34:21.767 clat (usec): min=260, max=41068, avg=1517.16, stdev=6975.44 00:34:21.767 lat (usec): min=276, max=41086, avg=1534.06, stdev=6977.57 00:34:21.767 clat percentiles (usec): 00:34:21.767 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 277], 00:34:21.768 | 30.00th=[ 281], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:34:21.768 | 70.00th=[ 289], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:34:21.768 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:21.769 | 99.99th=[41157] 00:34:21.769 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:34:21.769 slat (nsec): min=7266, max=42006, avg=13291.51, stdev=6205.90 00:34:21.769 clat (usec): min=147, max=344, avg=196.01, stdev=32.83 00:34:21.770 lat (usec): min=157, max=367, avg=209.30, stdev=35.47 00:34:21.770 clat percentiles (usec): 00:34:21.770 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:34:21.770 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 200], 00:34:21.770 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 245], 00:34:21.771 | 99.00th=[ 318], 99.50th=[ 322], 99.90th=[ 330], 99.95th=[ 347], 00:34:21.771 | 99.99th=[ 347] 
00:34:21.771 bw ( KiB/s): min= 8192, max= 8192, per=82.80%, avg=8192.00, stdev= 0.00, samples=1 00:34:21.771 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:34:21.771 lat (usec) : 250=63.40%, 500=35.57% 00:34:21.771 lat (msec) : 50=1.03% 00:34:21.772 cpu : usr=1.36%, sys=3.20%, ctx=1552, majf=0, minf=2 00:34:21.772 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.772 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.772 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.773 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.773 00:34:21.773 Run status group 0 (all jobs): 00:34:21.773 READ: bw=2450KiB/s (2509kB/s), 83.2KiB/s-2048KiB/s (85.2kB/s-2098kB/s), io=2536KiB (2597kB), run=1009-1035msec 00:34:21.774 WRITE: bw=9894KiB/s (10.1MB/s), 1979KiB/s-3973KiB/s (2026kB/s-4068kB/s), io=10.0MiB (10.5MB), run=1009-1035msec 00:34:21.774 00:34:21.774 Disk stats (read/write): 00:34:21.774 nvme0n1: ios=67/512, merge=0/0, ticks=736/115, in_queue=851, util=86.77% 00:34:21.775 nvme0n2: ios=97/512, merge=0/0, ticks=963/102, in_queue=1065, util=100.00% 00:34:21.775 nvme0n3: ios=87/512, merge=0/0, ticks=1720/105, in_queue=1825, util=98.12% 00:34:21.775 nvme0n4: ios=523/1024, merge=0/0, ticks=587/196, in_queue=783, util=89.68% 00:34:21.776 10:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:21.776 [global] 00:34:21.776 thread=1 00:34:21.776 invalidate=1 00:34:21.776 rw=randwrite 00:34:21.776 time_based=1 00:34:21.776 runtime=1 00:34:21.776 ioengine=libaio 00:34:21.776 direct=1 00:34:21.776 bs=4096 00:34:21.776 iodepth=1 00:34:21.776 norandommap=0 00:34:21.776 numjobs=1 00:34:21.776 00:34:21.777 verify_dump=1 00:34:21.777 
verify_backlog=512 00:34:21.777 verify_state_save=0 00:34:21.777 do_verify=1 00:34:21.777 verify=crc32c-intel 00:34:21.777 [job0] 00:34:21.777 filename=/dev/nvme0n1 00:34:21.777 [job1] 00:34:21.777 filename=/dev/nvme0n2 00:34:21.777 [job2] 00:34:21.777 filename=/dev/nvme0n3 00:34:21.777 [job3] 00:34:21.777 filename=/dev/nvme0n4 00:34:21.778 Could not set queue depth (nvme0n1) 00:34:21.778 Could not set queue depth (nvme0n2) 00:34:21.778 Could not set queue depth (nvme0n3) 00:34:21.778 Could not set queue depth (nvme0n4) 00:34:21.778 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.779 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.779 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.780 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.780 fio-3.35 00:34:21.780 Starting 4 threads 00:34:21.780 00:34:21.780 job0: (groupid=0, jobs=1): err= 0: pid=2702607: Mon Dec 9 10:43:29 2024 00:34:21.780 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:34:21.780 slat (nsec): min=5483, max=36639, avg=10285.72, stdev=4994.05 00:34:21.781 clat (usec): min=198, max=41001, avg=682.51, stdev=3994.40 00:34:21.781 lat (usec): min=204, max=41020, avg=692.80, stdev=3995.16 00:34:21.781 clat percentiles (usec): 00:34:21.781 | 1.00th=[ 225], 5.00th=[ 235], 10.00th=[ 241], 20.00th=[ 245], 00:34:21.782 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 265], 60.00th=[ 273], 00:34:21.782 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 490], 00:34:21.782 | 99.00th=[ 807], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:21.782 | 99.99th=[41157] 00:34:21.782 write: IOPS=1236, BW=4947KiB/s (5066kB/s)(4952KiB/1001msec); 0 zone resets 00:34:21.783 slat (nsec): min=7108, max=45260, avg=13822.34, 
stdev=6308.50 00:34:21.783 clat (usec): min=158, max=414, avg=214.12, stdev=51.07 00:34:21.783 lat (usec): min=169, max=424, avg=227.94, stdev=54.16 00:34:21.783 clat percentiles (usec): 00:34:21.784 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:34:21.784 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 194], 60.00th=[ 200], 00:34:21.784 | 70.00th=[ 219], 80.00th=[ 245], 90.00th=[ 306], 95.00th=[ 318], 00:34:21.784 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 400], 99.95th=[ 416], 00:34:21.784 | 99.99th=[ 416] 00:34:21.785 bw ( KiB/s): min= 4096, max= 4096, per=22.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.785 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.785 lat (usec) : 250=58.66%, 500=39.17%, 750=1.55%, 1000=0.18% 00:34:21.785 lat (msec) : 50=0.44% 00:34:21.786 cpu : usr=2.80%, sys=3.10%, ctx=2263, majf=0, minf=2 00:34:21.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.787 issued rwts: total=1024,1238,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.787 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.787 job1: (groupid=0, jobs=1): err= 0: pid=2702608: Mon Dec 9 10:43:29 2024 00:34:21.787 read: IOPS=159, BW=639KiB/s (654kB/s)(640KiB/1002msec) 00:34:21.788 slat (nsec): min=6145, max=35901, avg=8850.67, stdev=6136.19 00:34:21.788 clat (usec): min=217, max=41005, avg=5286.81, stdev=13359.68 00:34:21.788 lat (usec): min=224, max=41023, avg=5295.66, stdev=13364.66 00:34:21.788 clat percentiles (usec): 00:34:21.789 | 1.00th=[ 223], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 239], 00:34:21.789 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:34:21.789 | 70.00th=[ 260], 80.00th=[ 289], 90.00th=[41157], 95.00th=[41157], 00:34:21.789 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:34:21.790 | 99.99th=[41157] 00:34:21.790 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:34:21.790 slat (nsec): min=8043, max=38957, avg=11564.58, stdev=3882.50 00:34:21.790 clat (usec): min=155, max=619, avg=285.56, stdev=89.17 00:34:21.791 lat (usec): min=163, max=634, avg=297.12, stdev=90.14 00:34:21.791 clat percentiles (usec): 00:34:21.791 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 194], 00:34:21.791 | 30.00th=[ 229], 40.00th=[ 262], 50.00th=[ 281], 60.00th=[ 293], 00:34:21.791 | 70.00th=[ 318], 80.00th=[ 388], 90.00th=[ 412], 95.00th=[ 445], 00:34:21.793 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 619], 99.95th=[ 619], 00:34:21.793 | 99.99th=[ 619] 00:34:21.793 bw ( KiB/s): min= 4096, max= 4096, per=22.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.794 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.794 lat (usec) : 250=38.54%, 500=57.29%, 750=1.19% 00:34:21.794 lat (msec) : 50=2.98% 00:34:21.794 cpu : usr=0.50%, sys=1.00%, ctx=674, majf=0, minf=1 00:34:21.794 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.795 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.795 issued rwts: total=160,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.796 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.796 job2: (groupid=0, jobs=1): err= 0: pid=2702609: Mon Dec 9 10:43:29 2024 00:34:21.796 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:34:21.796 slat (nsec): min=5683, max=50069, avg=10535.77, stdev=5293.73 00:34:21.796 clat (usec): min=203, max=552, avg=241.57, stdev=29.94 00:34:21.797 lat (usec): min=209, max=572, avg=252.11, stdev=32.50 00:34:21.797 clat percentiles (usec): 00:34:21.797 | 1.00th=[ 210], 5.00th=[ 212], 10.00th=[ 215], 20.00th=[ 223], 00:34:21.797 | 30.00th=[ 231], 40.00th=[ 237], 
50.00th=[ 243], 60.00th=[ 245], 00:34:21.798 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 265], 00:34:21.798 | 99.00th=[ 404], 99.50th=[ 490], 99.90th=[ 537], 99.95th=[ 537], 00:34:21.798 | 99.99th=[ 553] 00:34:21.798 write: IOPS=2358, BW=9435KiB/s (9661kB/s)(9444KiB/1001msec); 0 zone resets 00:34:21.799 slat (nsec): min=7223, max=55155, avg=13950.92, stdev=6981.60 00:34:21.799 clat (usec): min=149, max=322, avg=183.20, stdev=24.86 00:34:21.799 lat (usec): min=157, max=348, avg=197.15, stdev=29.76 00:34:21.799 clat percentiles (usec): 00:34:21.799 | 1.00th=[ 151], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:34:21.800 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:34:21.800 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 210], 95.00th=[ 237], 00:34:21.800 | 99.00th=[ 251], 99.50th=[ 302], 99.90th=[ 322], 99.95th=[ 322], 00:34:21.800 | 99.99th=[ 322] 00:34:21.801 bw ( KiB/s): min=11096, max=11096, per=60.96%, avg=11096.00, stdev= 0.00, samples=1 00:34:21.801 iops : min= 2774, max= 2774, avg=2774.00, stdev= 0.00, samples=1 00:34:21.801 lat (usec) : 250=86.85%, 500=12.93%, 750=0.23% 00:34:21.801 cpu : usr=3.90%, sys=7.40%, ctx=4411, majf=0, minf=1 00:34:21.802 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.802 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.802 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.802 issued rwts: total=2048,2361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.803 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.803 job3: (groupid=0, jobs=1): err= 0: pid=2702610: Mon Dec 9 10:43:29 2024 00:34:21.803 read: IOPS=23, BW=94.5KiB/s (96.8kB/s)(96.0KiB/1016msec) 00:34:21.803 slat (nsec): min=7045, max=34384, avg=21255.21, stdev=9847.19 00:34:21.804 clat (usec): min=404, max=41344, avg=35946.16, stdev=13657.97 00:34:21.804 lat (usec): min=423, max=41351, avg=35967.42, stdev=13658.40 00:34:21.804 
clat percentiles (usec): 00:34:21.804 | 1.00th=[ 404], 5.00th=[ 529], 10.00th=[ 783], 20.00th=[40633], 00:34:21.805 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:21.805 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:21.805 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:21.805 | 99.99th=[41157] 00:34:21.805 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:34:21.806 slat (nsec): min=6337, max=32681, avg=11805.02, stdev=4526.65 00:34:21.806 clat (usec): min=147, max=2318, avg=281.90, stdev=114.73 00:34:21.806 lat (usec): min=155, max=2327, avg=293.70, stdev=113.70 00:34:21.806 clat percentiles (usec): 00:34:21.807 | 1.00th=[ 159], 5.00th=[ 174], 10.00th=[ 188], 20.00th=[ 206], 00:34:21.807 | 30.00th=[ 233], 40.00th=[ 245], 50.00th=[ 265], 60.00th=[ 302], 00:34:21.807 | 70.00th=[ 318], 80.00th=[ 347], 90.00th=[ 379], 95.00th=[ 383], 00:34:21.807 | 99.00th=[ 441], 99.50th=[ 474], 99.90th=[ 2311], 99.95th=[ 2311], 00:34:21.807 | 99.99th=[ 2311] 00:34:21.808 bw ( KiB/s): min= 4096, max= 4096, per=22.50%, avg=4096.00, stdev= 0.00, samples=1 00:34:21.808 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:21.808 lat (usec) : 250=39.93%, 500=55.41%, 750=0.37%, 1000=0.19% 00:34:21.808 lat (msec) : 4=0.19%, 50=3.92% 00:34:21.809 cpu : usr=0.20%, sys=0.69%, ctx=537, majf=0, minf=1 00:34:21.809 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.810 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.810 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.810 00:34:21.810 Run status group 0 (all jobs): 00:34:21.810 READ: bw=12.5MiB/s (13.1MB/s), 94.5KiB/s-8184KiB/s (96.8kB/s-8380kB/s), io=12.7MiB (13.3MB), 
run=1001-1016msec 00:34:21.811 WRITE: bw=17.8MiB/s (18.6MB/s), 2016KiB/s-9435KiB/s (2064kB/s-9661kB/s), io=18.1MiB (18.9MB), run=1001-1016msec 00:34:21.811 00:34:21.811 Disk stats (read/write): 00:34:21.811 nvme0n1: ios=740/1024, merge=0/0, ticks=633/196, in_queue=829, util=87.17% 00:34:21.811 nvme0n2: ios=202/512, merge=0/0, ticks=1044/138, in_queue=1182, util=98.07% 00:34:21.812 nvme0n3: ios=1791/2048, merge=0/0, ticks=1387/351, in_queue=1738, util=97.71% 00:34:21.812 nvme0n4: ios=71/512, merge=0/0, ticks=733/145, in_queue=878, util=90.67% 00:34:21.813 10:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:21.813 [global] 00:34:21.813 thread=1 00:34:21.813 invalidate=1 00:34:21.813 rw=write 00:34:21.813 time_based=1 00:34:21.813 runtime=1 00:34:21.813 ioengine=libaio 00:34:21.813 direct=1 00:34:21.813 bs=4096 00:34:21.813 iodepth=128 00:34:21.813 norandommap=0 00:34:21.813 numjobs=1 00:34:21.813 00:34:21.813 verify_dump=1 00:34:21.813 verify_backlog=512 00:34:21.813 verify_state_save=0 00:34:21.813 do_verify=1 00:34:21.813 verify=crc32c-intel 00:34:21.813 [job0] 00:34:21.814 filename=/dev/nvme0n1 00:34:21.814 [job1] 00:34:21.814 filename=/dev/nvme0n2 00:34:21.814 [job2] 00:34:21.814 filename=/dev/nvme0n3 00:34:21.814 [job3] 00:34:21.814 filename=/dev/nvme0n4 00:34:21.814 Could not set queue depth (nvme0n1) 00:34:21.814 Could not set queue depth (nvme0n2) 00:34:21.814 Could not set queue depth (nvme0n3) 00:34:21.814 Could not set queue depth (nvme0n4) 00:34:21.817 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.817 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.818 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.818 
job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.818 fio-3.35 00:34:21.818 Starting 4 threads 00:34:21.818 00:34:21.819 job0: (groupid=0, jobs=1): err= 0: pid=2702846: Mon Dec 9 10:43:31 2024 00:34:21.819 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:34:21.819 slat (nsec): min=1946, max=19187k, avg=84851.10, stdev=813603.86 00:34:21.819 clat (usec): min=2589, max=53177, avg=13794.15, stdev=7362.53 00:34:21.819 lat (usec): min=2592, max=53183, avg=13879.01, stdev=7427.22 00:34:21.820 clat percentiles (usec): 00:34:21.820 | 1.00th=[ 4080], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9372], 00:34:21.820 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10552], 60.00th=[11731], 00:34:21.820 | 70.00th=[13829], 80.00th=[17171], 90.00th=[25297], 95.00th=[29230], 00:34:21.821 | 99.00th=[43254], 99.50th=[46924], 99.90th=[52691], 99.95th=[53216], 00:34:21.821 | 99.99th=[53216] 00:34:21.821 write: IOPS=4833, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1004msec); 0 zone resets 00:34:21.821 slat (usec): min=2, max=22416, avg=84.08, stdev=690.26 00:34:21.821 clat (usec): min=748, max=48481, avg=13181.22, stdev=5689.08 00:34:21.822 lat (usec): min=755, max=48485, avg=13265.30, stdev=5740.88 00:34:21.822 clat percentiles (usec): 00:34:21.822 | 1.00th=[ 3818], 5.00th=[ 6783], 10.00th=[ 7963], 20.00th=[ 9765], 00:34:21.823 | 30.00th=[10290], 40.00th=[10683], 50.00th=[10945], 60.00th=[11338], 00:34:21.823 | 70.00th=[13960], 80.00th=[19006], 90.00th=[21890], 95.00th=[23987], 00:34:21.823 | 99.00th=[29754], 99.50th=[32375], 99.90th=[41157], 99.95th=[48497], 00:34:21.823 | 99.99th=[48497] 00:34:21.824 bw ( KiB/s): min=16384, max=21424, per=28.99%, avg=18904.00, stdev=3563.82, samples=2 00:34:21.824 iops : min= 4096, max= 5356, avg=4726.00, stdev=890.95, samples=2 00:34:21.824 lat (usec) : 750=0.02% 00:34:21.825 lat (msec) : 4=0.92%, 10=23.70%, 20=60.33%, 50=14.90%, 100=0.13% 00:34:21.825 cpu : usr=2.19%, sys=3.79%, ctx=350, 
majf=0, minf=1 00:34:21.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:21.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.826 issued rwts: total=4608,4853,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.827 job1: (groupid=0, jobs=1): err= 0: pid=2702848: Mon Dec 9 10:43:31 2024 00:34:21.828 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:34:21.828 slat (usec): min=3, max=18850, avg=123.10, stdev=1033.17 00:34:21.828 clat (usec): min=6625, max=38186, avg=16289.68, stdev=5487.30 00:34:21.829 lat (usec): min=6632, max=38190, avg=16412.78, stdev=5572.56 00:34:21.829 clat percentiles (usec): 00:34:21.829 | 1.00th=[ 9241], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11469], 00:34:21.830 | 30.00th=[13042], 40.00th=[14353], 50.00th=[15008], 60.00th=[15926], 00:34:21.830 | 70.00th=[19530], 80.00th=[20055], 90.00th=[22414], 95.00th=[28443], 00:34:21.830 | 99.00th=[31589], 99.50th=[33162], 99.90th=[38011], 99.95th=[38011], 00:34:21.831 | 99.99th=[38011] 00:34:21.831 write: IOPS=3971, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1006msec); 0 zone resets 00:34:21.831 slat (usec): min=4, max=23583, avg=121.12, stdev=876.65 00:34:21.832 clat (usec): min=1371, max=51481, avg=17349.39, stdev=9609.50 00:34:21.832 lat (usec): min=1880, max=51501, avg=17470.52, stdev=9686.40 00:34:21.832 clat percentiles (usec): 00:34:21.832 | 1.00th=[ 3752], 5.00th=[ 6521], 10.00th=[ 8291], 20.00th=[10159], 00:34:21.833 | 30.00th=[10945], 40.00th=[12256], 50.00th=[15008], 60.00th=[18482], 00:34:21.833 | 70.00th=[20579], 80.00th=[22152], 90.00th=[26870], 95.00th=[40633], 00:34:21.833 | 99.00th=[49021], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:34:21.833 | 99.99th=[51643] 00:34:21.834 bw ( KiB/s): min=12288, max=18648, per=23.72%, avg=15468.00, stdev=4497.20, 
samples=2 00:34:21.834 iops : min= 3072, max= 4662, avg=3867.00, stdev=1124.30, samples=2 00:34:21.834 lat (msec) : 2=0.12%, 4=0.53%, 10=13.56%, 20=59.49%, 50=25.93% 00:34:21.835 lat (msec) : 100=0.37% 00:34:21.835 cpu : usr=2.29%, sys=5.47%, ctx=314, majf=0, minf=1 00:34:21.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:21.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.837 issued rwts: total=3584,3995,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.837 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.837 job2: (groupid=0, jobs=1): err= 0: pid=2702849: Mon Dec 9 10:43:31 2024 00:34:21.838 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:34:21.838 slat (usec): min=3, max=8436, avg=107.89, stdev=597.06 00:34:21.838 clat (usec): min=2878, max=31363, avg=13945.29, stdev=3340.09 00:34:21.838 lat (usec): min=2883, max=31400, avg=14053.18, stdev=3390.26 00:34:21.838 clat percentiles (usec): 00:34:21.839 | 1.00th=[ 3949], 5.00th=[10290], 10.00th=[11076], 20.00th=[12256], 00:34:21.839 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13435], 60.00th=[13960], 00:34:21.839 | 70.00th=[14746], 80.00th=[15401], 90.00th=[17433], 95.00th=[20055], 00:34:21.840 | 99.00th=[25560], 99.50th=[27132], 99.90th=[28443], 99.95th=[28443], 00:34:21.840 | 99.99th=[31327] 00:34:21.840 write: IOPS=4458, BW=17.4MiB/s (18.3MB/s)(17.5MiB/1005msec); 0 zone resets 00:34:21.840 slat (usec): min=4, max=32950, avg=114.76, stdev=742.71 00:34:21.841 clat (usec): min=709, max=50855, avg=15673.55, stdev=6275.56 00:34:21.841 lat (usec): min=1101, max=50862, avg=15788.31, stdev=6301.15 00:34:21.841 clat percentiles (usec): 00:34:21.841 | 1.00th=[ 4752], 5.00th=[ 8848], 10.00th=[11469], 20.00th=[12780], 00:34:21.842 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:34:21.842 | 70.00th=[15008], 
80.00th=[20055], 90.00th=[23462], 95.00th=[25822], 00:34:21.842 | 99.00th=[42730], 99.50th=[46400], 99.90th=[47449], 99.95th=[47449], 00:34:21.842 | 99.99th=[50594] 00:34:21.843 bw ( KiB/s): min=15184, max=19640, per=26.70%, avg=17412.00, stdev=3150.87, samples=2 00:34:21.843 iops : min= 3796, max= 4910, avg=4353.00, stdev=787.72, samples=2 00:34:21.843 lat (usec) : 750=0.01% 00:34:21.843 lat (msec) : 2=0.02%, 4=1.08%, 10=3.98%, 20=82.28%, 50=12.62% 00:34:21.844 lat (msec) : 100=0.01% 00:34:21.844 cpu : usr=4.28%, sys=5.08%, ctx=573, majf=0, minf=1 00:34:21.844 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:21.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.845 issued rwts: total=4096,4481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.845 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.846 job3: (groupid=0, jobs=1): err= 0: pid=2702850: Mon Dec 9 10:43:31 2024 00:34:21.846 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1005msec) 00:34:21.846 slat (usec): min=2, max=14951, avg=155.01, stdev=938.72 00:34:21.846 clat (usec): min=2189, max=61453, avg=20163.53, stdev=9380.20 00:34:21.847 lat (usec): min=3341, max=61469, avg=20318.55, stdev=9441.14 00:34:21.847 clat percentiles (usec): 00:34:21.847 | 1.00th=[ 8979], 5.00th=[11600], 10.00th=[12256], 20.00th=[12911], 00:34:21.847 | 30.00th=[13435], 40.00th=[13960], 50.00th=[16188], 60.00th=[18482], 00:34:21.848 | 70.00th=[25560], 80.00th=[28181], 90.00th=[31327], 95.00th=[39060], 00:34:21.848 | 99.00th=[51643], 99.50th=[51643], 99.90th=[58983], 99.95th=[58983], 00:34:21.848 | 99.99th=[61604] 00:34:21.848 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:34:21.849 slat (usec): min=3, max=21969, avg=158.89, stdev=1052.81 00:34:21.849 clat (usec): min=5509, max=88869, avg=21263.73, stdev=15088.09 00:34:21.849 lat (usec): 
min=7839, max=88877, avg=21422.62, stdev=15177.37 00:34:21.849 clat percentiles (usec): 00:34:21.850 | 1.00th=[ 8717], 5.00th=[10814], 10.00th=[12256], 20.00th=[12911], 00:34:21.850 | 30.00th=[13173], 40.00th=[13566], 50.00th=[14484], 60.00th=[16450], 00:34:21.850 | 70.00th=[17957], 80.00th=[31589], 90.00th=[40633], 95.00th=[45876], 00:34:21.850 | 99.00th=[86508], 99.50th=[87557], 99.90th=[88605], 99.95th=[88605], 00:34:21.851 | 99.99th=[88605] 00:34:21.851 bw ( KiB/s): min= 8192, max=16384, per=18.84%, avg=12288.00, stdev=5792.62, samples=2 00:34:21.851 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:34:21.852 lat (msec) : 4=0.20%, 10=2.56%, 20=64.68%, 50=29.46%, 100=3.10% 00:34:21.852 cpu : usr=2.49%, sys=3.69%, ctx=266, majf=0, minf=1 00:34:21.852 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:34:21.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.853 issued rwts: total=3054,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.853 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.853 00:34:21.853 Run status group 0 (all jobs): 00:34:21.854 READ: bw=59.6MiB/s (62.5MB/s), 11.9MiB/s-17.9MiB/s (12.4MB/s-18.8MB/s), io=59.9MiB (62.8MB), run=1004-1006msec 00:34:21.854 WRITE: bw=63.7MiB/s (66.8MB/s), 11.9MiB/s-18.9MiB/s (12.5MB/s-19.8MB/s), io=64.1MiB (67.2MB), run=1004-1006msec 00:34:21.854 00:34:21.854 Disk stats (read/write): 00:34:21.855 nvme0n1: ios=3634/4056, merge=0/0, ticks=50880/54591, in_queue=105471, util=87.68% 00:34:21.855 nvme0n2: ios=3104/3079, merge=0/0, ticks=50270/54955, in_queue=105225, util=87.72% 00:34:21.856 nvme0n3: ios=3642/3871, merge=0/0, ticks=17758/24699, in_queue=42457, util=98.33% 00:34:21.856 nvme0n4: ios=2614/2859, merge=0/0, ticks=16931/18790, in_queue=35721, util=98.43% 00:34:21.857 10:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:21.857 [global] 00:34:21.857 thread=1 00:34:21.857 invalidate=1 00:34:21.857 rw=randwrite 00:34:21.857 time_based=1 00:34:21.857 runtime=1 00:34:21.857 ioengine=libaio 00:34:21.857 direct=1 00:34:21.857 bs=4096 00:34:21.857 iodepth=128 00:34:21.857 norandommap=0 00:34:21.857 numjobs=1 00:34:21.857 00:34:21.858 verify_dump=1 00:34:21.858 verify_backlog=512 00:34:21.858 verify_state_save=0 00:34:21.858 do_verify=1 00:34:21.858 verify=crc32c-intel 00:34:21.858 [job0] 00:34:21.858 filename=/dev/nvme0n1 00:34:21.858 [job1] 00:34:21.858 filename=/dev/nvme0n2 00:34:21.858 [job2] 00:34:21.858 filename=/dev/nvme0n3 00:34:21.858 [job3] 00:34:21.858 filename=/dev/nvme0n4 00:34:21.859 Could not set queue depth (nvme0n1) 00:34:21.859 Could not set queue depth (nvme0n2) 00:34:21.859 Could not set queue depth (nvme0n3) 00:34:21.859 Could not set queue depth (nvme0n4) 00:34:21.859 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.860 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.860 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.861 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:21.861 fio-3.35 00:34:21.861 Starting 4 threads 00:34:21.861 00:34:21.861 job0: (groupid=0, jobs=1): err= 0: pid=2703075: Mon Dec 9 10:43:32 2024 00:34:21.861 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:34:21.862 slat (nsec): min=1905, max=53868k, avg=197305.38, stdev=1829562.62 00:34:21.862 clat (msec): min=4, max=103, avg=23.50, stdev=16.16 00:34:21.862 lat (msec): min=4, max=107, avg=23.70, stdev=16.32 00:34:21.862 clat percentiles (msec): 00:34:21.863 | 1.00th=[ 8], 
5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:34:21.863 | 30.00th=[ 12], 40.00th=[ 15], 50.00th=[ 20], 60.00th=[ 24], 00:34:21.864 | 70.00th=[ 29], 80.00th=[ 34], 90.00th=[ 41], 95.00th=[ 55], 00:34:21.864 | 99.00th=[ 104], 99.50th=[ 104], 99.90th=[ 104], 99.95th=[ 104], 00:34:21.864 | 99.99th=[ 104] 00:34:21.864 write: IOPS=3329, BW=13.0MiB/s (13.6MB/s)(13.1MiB/1005msec); 0 zone resets 00:34:21.865 slat (usec): min=2, max=8411, avg=99.18, stdev=652.78 00:34:21.865 clat (usec): min=1575, max=110792, avg=16362.58, stdev=15858.58 00:34:21.865 lat (msec): min=3, max=110, avg=16.46, stdev=15.86 00:34:21.865 clat percentiles (msec): 00:34:21.866 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 10], 00:34:21.866 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 14], 60.00th=[ 15], 00:34:21.866 | 70.00th=[ 17], 80.00th=[ 17], 90.00th=[ 23], 95.00th=[ 39], 00:34:21.867 | 99.00th=[ 108], 99.50th=[ 111], 99.90th=[ 111], 99.95th=[ 111], 00:34:21.867 | 99.99th=[ 111] 00:34:21.867 bw ( KiB/s): min= 8192, max=17552, per=22.60%, avg=12872.00, stdev=6618.52, samples=2 00:34:21.868 iops : min= 2048, max= 4388, avg=3218.00, stdev=1654.63, samples=2 00:34:21.868 lat (msec) : 2=0.02%, 4=0.09%, 10=25.83%, 20=44.36%, 50=23.98% 00:34:21.868 lat (msec) : 100=4.24%, 250=1.48% 00:34:21.868 cpu : usr=1.59%, sys=3.29%, ctx=190, majf=0, minf=2 00:34:21.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:21.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.870 issued rwts: total=3072,3346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.870 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.870 job1: (groupid=0, jobs=1): err= 0: pid=2703076: Mon Dec 9 10:43:32 2024 00:34:21.870 read: IOPS=3426, BW=13.4MiB/s (14.0MB/s)(13.5MiB/1005msec) 00:34:21.871 slat (usec): min=2, max=14756, avg=123.28, stdev=858.31 00:34:21.871 clat (usec): 
min=2329, max=60280, avg=14566.54, stdev=6945.01 00:34:21.871 lat (usec): min=5261, max=60287, avg=14689.82, stdev=7017.51 00:34:21.871 clat percentiles (usec): 00:34:21.872 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10290], 00:34:21.872 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12387], 60.00th=[13960], 00:34:21.872 | 70.00th=[15795], 80.00th=[17171], 90.00th=[20579], 95.00th=[24249], 00:34:21.873 | 99.00th=[52167], 99.50th=[56361], 99.90th=[60031], 99.95th=[60031], 00:34:21.873 | 99.99th=[60031] 00:34:21.873 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:34:21.873 slat (usec): min=5, max=11424, avg=152.54, stdev=768.11 00:34:21.874 clat (usec): min=1275, max=97388, avg=21547.87, stdev=16636.12 00:34:21.874 lat (usec): min=1284, max=97397, avg=21700.41, stdev=16744.75 00:34:21.874 clat percentiles (usec): 00:34:21.874 | 1.00th=[ 3130], 5.00th=[ 6849], 10.00th=[ 8094], 20.00th=[10945], 00:34:21.875 | 30.00th=[11994], 40.00th=[12911], 50.00th=[18220], 60.00th=[20579], 00:34:21.875 | 70.00th=[23987], 80.00th=[29230], 90.00th=[32900], 95.00th=[58459], 00:34:21.875 | 99.00th=[89654], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994], 00:34:21.875 | 99.99th=[96994] 00:34:21.876 bw ( KiB/s): min= 9232, max=19440, per=25.17%, avg=14336.00, stdev=7218.15, samples=2 00:34:21.876 iops : min= 2308, max= 4860, avg=3584.00, stdev=1804.54, samples=2 00:34:21.876 lat (msec) : 2=0.27%, 4=0.73%, 10=12.38%, 20=58.62%, 50=23.86% 00:34:21.876 lat (msec) : 100=4.14% 00:34:21.877 cpu : usr=2.99%, sys=4.68%, ctx=399, majf=0, minf=1 00:34:21.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:21.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.878 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.878 issued rwts: total=3444,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.878 latency : target=0, window=0, percentile=100.00%, depth=128 
00:34:21.879 job2: (groupid=0, jobs=1): err= 0: pid=2703077: Mon Dec 9 10:43:32 2024 00:34:21.879 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:34:21.879 slat (usec): min=2, max=34883, avg=145.56, stdev=1086.56 00:34:21.880 clat (usec): min=4823, max=54752, avg=17682.23, stdev=8588.43 00:34:21.880 lat (usec): min=4829, max=54757, avg=17827.79, stdev=8632.64 00:34:21.880 clat percentiles (usec): 00:34:21.880 | 1.00th=[ 9372], 5.00th=[10552], 10.00th=[11207], 20.00th=[11863], 00:34:21.881 | 30.00th=[12518], 40.00th=[15008], 50.00th=[15270], 60.00th=[16581], 00:34:21.881 | 70.00th=[17695], 80.00th=[20841], 90.00th=[26346], 95.00th=[38011], 00:34:21.881 | 99.00th=[54789], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:34:21.882 | 99.99th=[54789] 00:34:21.882 write: IOPS=3386, BW=13.2MiB/s (13.9MB/s)(13.3MiB/1004msec); 0 zone resets 00:34:21.882 slat (usec): min=3, max=16067, avg=153.94, stdev=923.67 00:34:21.883 clat (usec): min=935, max=55209, avg=21503.70, stdev=12418.12 00:34:21.883 lat (usec): min=945, max=56112, avg=21657.64, stdev=12462.43 00:34:21.883 clat percentiles (usec): 00:34:21.883 | 1.00th=[ 3818], 5.00th=[ 8225], 10.00th=[10814], 20.00th=[11863], 00:34:21.884 | 30.00th=[12256], 40.00th=[15664], 50.00th=[18220], 60.00th=[20317], 00:34:21.884 | 70.00th=[23987], 80.00th=[30016], 90.00th=[42730], 95.00th=[51119], 00:34:21.884 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:34:21.885 | 99.99th=[55313] 00:34:21.885 bw ( KiB/s): min= 9792, max=16416, per=23.01%, avg=13104.00, stdev=4683.88, samples=2 00:34:21.885 iops : min= 2448, max= 4104, avg=3276.00, stdev=1170.97, samples=2 00:34:21.886 lat (usec) : 1000=0.03% 00:34:21.886 lat (msec) : 4=0.51%, 10=5.04%, 20=62.90%, 50=27.32%, 100=4.20% 00:34:21.886 cpu : usr=3.29%, sys=4.99%, ctx=279, majf=0, minf=2 00:34:21.887 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:21.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:21.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.888 issued rwts: total=3072,3400,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.888 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.889 job3: (groupid=0, jobs=1): err= 0: pid=2703078: Mon Dec 9 10:43:32 2024 00:34:21.889 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:34:21.889 slat (usec): min=3, max=8951, avg=119.42, stdev=715.33 00:34:21.889 clat (usec): min=8772, max=27747, avg=15902.87, stdev=3547.80 00:34:21.890 lat (usec): min=8779, max=29763, avg=16022.29, stdev=3594.51 00:34:21.890 clat percentiles (usec): 00:34:21.890 | 1.00th=[ 9634], 5.00th=[10683], 10.00th=[11207], 20.00th=[12649], 00:34:21.891 | 30.00th=[13566], 40.00th=[14484], 50.00th=[15926], 60.00th=[16450], 00:34:21.891 | 70.00th=[18220], 80.00th=[19268], 90.00th=[21103], 95.00th=[21890], 00:34:21.891 | 99.00th=[23725], 99.50th=[25035], 99.90th=[27657], 99.95th=[27657], 00:34:21.892 | 99.99th=[27657] 00:34:21.892 write: IOPS=3970, BW=15.5MiB/s (16.3MB/s)(15.6MiB/1006msec); 0 zone resets 00:34:21.892 slat (usec): min=3, max=7229, avg=134.33, stdev=671.29 00:34:21.893 clat (usec): min=4988, max=35958, avg=17524.16, stdev=6503.38 00:34:21.893 lat (usec): min=5761, max=35983, avg=17658.49, stdev=6559.48 00:34:21.893 clat percentiles (usec): 00:34:21.893 | 1.00th=[10552], 5.00th=[11207], 10.00th=[11338], 20.00th=[12256], 00:34:21.894 | 30.00th=[12911], 40.00th=[13960], 50.00th=[15139], 60.00th=[16909], 00:34:21.894 | 70.00th=[19006], 80.00th=[23987], 90.00th=[29230], 95.00th=[30278], 00:34:21.895 | 99.00th=[33424], 99.50th=[33817], 99.90th=[34866], 99.95th=[35914], 00:34:21.895 | 99.99th=[35914] 00:34:21.895 bw ( KiB/s): min=14544, max=16384, per=27.15%, avg=15464.00, stdev=1301.08, samples=2 00:34:21.896 iops : min= 3636, max= 4096, avg=3866.00, stdev=325.27, samples=2 00:34:21.896 lat (msec) : 10=1.77%, 20=80.27%, 50=17.96% 00:34:21.896 cpu : usr=4.88%, sys=7.06%, 
ctx=335, majf=0, minf=2 00:34:21.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:21.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.897 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:21.898 issued rwts: total=3584,3994,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.898 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:21.898 00:34:21.898 Run status group 0 (all jobs): 00:34:21.899 READ: bw=51.1MiB/s (53.6MB/s), 11.9MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=51.5MiB (54.0MB), run=1004-1006msec 00:34:21.899 WRITE: bw=55.6MiB/s (58.3MB/s), 13.0MiB/s-15.5MiB/s (13.6MB/s-16.3MB/s), io=56.0MiB (58.7MB), run=1004-1006msec 00:34:21.899 00:34:21.899 Disk stats (read/write): 00:34:21.900 nvme0n1: ios=2197/2560, merge=0/0, ticks=28334/16036, in_queue=44370, util=87.17% 00:34:21.900 nvme0n2: ios=3116/3151, merge=0/0, ticks=44497/56180, in_queue=100677, util=100.00% 00:34:21.901 nvme0n3: ios=2623/3072, merge=0/0, ticks=20621/25797, in_queue=46418, util=88.96% 00:34:21.901 nvme0n4: ios=3156/3584, merge=0/0, ticks=20359/25214, in_queue=45573, util=89.61% 00:34:21.902 10:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:21.902 10:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2703213 00:34:21.903 10:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:21.903 10:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:21.904 [global] 00:34:21.904 thread=1 00:34:21.904 invalidate=1 00:34:21.904 rw=read 00:34:21.904 time_based=1 00:34:21.904 runtime=10 00:34:21.904 ioengine=libaio 00:34:21.904 direct=1 00:34:21.904 bs=4096 00:34:21.904 iodepth=1 00:34:21.904 norandommap=1 00:34:21.904 
numjobs=1 00:34:21.904 00:34:21.904 [job0] 00:34:21.904 filename=/dev/nvme0n1 00:34:21.904 [job1] 00:34:21.905 filename=/dev/nvme0n2 00:34:21.905 [job2] 00:34:21.905 filename=/dev/nvme0n3 00:34:21.905 [job3] 00:34:21.905 filename=/dev/nvme0n4 00:34:21.905 Could not set queue depth (nvme0n1) 00:34:21.905 Could not set queue depth (nvme0n2) 00:34:21.905 Could not set queue depth (nvme0n3) 00:34:21.906 Could not set queue depth (nvme0n4) 00:34:21.906 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.906 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.907 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.907 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:21.907 fio-3.35 00:34:21.908 Starting 4 threads 00:34:21.908 10:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:21.909 10:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:21.910 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=17788928, buflen=4096 00:34:21.910 fio: pid=2703333, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:21.911 10:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.911 10:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:21.912 fio: io_u error on file /dev/nvme0n3: Operation not 
supported: read offset=44511232, buflen=4096 00:34:21.912 fio: pid=2703327, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:21.913 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45600768, buflen=4096 00:34:21.913 fio: pid=2703309, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:21.914 10:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.914 10:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:21.915 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=37527552, buflen=4096 00:34:21.915 fio: pid=2703312, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:21.916 10:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.917 10:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:21.917 00:34:21.917 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2703309: Mon Dec 9 10:43:36 2024 00:34:21.918 read: IOPS=3129, BW=12.2MiB/s (12.8MB/s)(43.5MiB/3558msec) 00:34:21.918 slat (usec): min=4, max=15624, avg=14.82, stdev=257.02 00:34:21.918 clat (usec): min=171, max=41954, avg=301.16, stdev=1116.57 00:34:21.918 lat (usec): min=180, max=41969, avg=315.99, stdev=1146.10 00:34:21.919 clat percentiles (usec): 00:34:21.919 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:34:21.919 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 262], 00:34:21.920 | 70.00th=[ 277], 
80.00th=[ 306], 90.00th=[ 355], 95.00th=[ 375], 00:34:21.920 | 99.00th=[ 494], 99.50th=[ 529], 99.90th=[ 1090], 99.95th=[41157], 00:34:21.920 | 99.99th=[41681] 00:34:21.921 bw ( KiB/s): min=10112, max=15912, per=36.50%, avg=13401.33, stdev=1932.69, samples=6 00:34:21.921 iops : min= 2528, max= 3978, avg=3350.33, stdev=483.17, samples=6 00:34:21.921 lat (usec) : 250=47.85%, 500=51.23%, 750=0.75%, 1000=0.05% 00:34:21.922 lat (msec) : 2=0.03%, 50=0.08% 00:34:21.922 cpu : usr=1.77%, sys=5.17%, ctx=11138, majf=0, minf=1 00:34:21.922 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.923 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.924 issued rwts: total=11134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.925 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2703312: Mon Dec 9 10:43:36 2024 00:34:21.925 read: IOPS=2368, BW=9475KiB/s (9702kB/s)(35.8MiB/3868msec) 00:34:21.925 slat (usec): min=4, max=18710, avg=16.57, stdev=332.48 00:34:21.926 clat (usec): min=193, max=41196, avg=400.89, stdev=2302.21 00:34:21.926 lat (usec): min=200, max=41201, avg=417.45, stdev=2330.37 00:34:21.926 clat percentiles (usec): 00:34:21.927 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 229], 00:34:21.927 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 265], 00:34:21.927 | 70.00th=[ 273], 80.00th=[ 289], 90.00th=[ 322], 95.00th=[ 396], 00:34:21.928 | 99.00th=[ 498], 99.50th=[ 693], 99.90th=[41157], 99.95th=[41157], 00:34:21.928 | 99.99th=[41157] 00:34:21.928 bw ( KiB/s): min= 96, max=14448, per=24.72%, avg=9078.00, stdev=6178.22, samples=7 00:34:21.929 iops : min= 24, max= 3612, avg=2269.29, stdev=1544.68, samples=7 00:34:21.929 lat (usec) : 250=46.61%, 500=52.43%, 750=0.49%, 1000=0.07% 
00:34:21.930 lat (msec) : 2=0.01%, 4=0.01%, 10=0.03%, 20=0.01%, 50=0.33% 00:34:21.930 cpu : usr=0.88%, sys=2.95%, ctx=9170, majf=0, minf=2 00:34:21.930 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.931 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.932 issued rwts: total=9163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.933 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2703327: Mon Dec 9 10:43:36 2024 00:34:21.933 read: IOPS=3287, BW=12.8MiB/s (13.5MB/s)(42.4MiB/3306msec) 00:34:21.933 slat (nsec): min=4295, max=77630, avg=11010.68, stdev=6153.16 00:34:21.934 clat (usec): min=186, max=41326, avg=288.66, stdev=785.45 00:34:21.934 lat (usec): min=192, max=41341, avg=299.67, stdev=785.72 00:34:21.934 clat percentiles (usec): 00:34:21.935 | 1.00th=[ 210], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 227], 00:34:21.935 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:34:21.936 | 70.00th=[ 273], 80.00th=[ 293], 90.00th=[ 388], 95.00th=[ 469], 00:34:21.936 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 603], 99.95th=[ 660], 00:34:21.936 | 99.99th=[41157] 00:34:21.937 bw ( KiB/s): min=10952, max=15584, per=36.77%, avg=13500.17, stdev=1818.98, samples=6 00:34:21.937 iops : min= 2738, max= 3896, avg=3375.00, stdev=454.70, samples=6 00:34:21.937 lat (usec) : 250=56.47%, 500=41.36%, 750=2.13% 00:34:21.938 lat (msec) : 50=0.04% 00:34:21.938 cpu : usr=1.85%, sys=3.78%, ctx=10868, majf=0, minf=2 00:34:21.938 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.939 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.939 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.940 issued rwts: 
total=10868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.940 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.941 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2703333: Mon Dec 9 10:43:36 2024 00:34:21.941 read: IOPS=1456, BW=5824KiB/s (5963kB/s)(17.0MiB/2983msec) 00:34:21.942 slat (nsec): min=4593, max=53779, avg=9302.22, stdev=4625.18 00:34:21.942 clat (usec): min=198, max=41247, avg=672.73, stdev=3595.37 00:34:21.942 lat (usec): min=204, max=41252, avg=682.03, stdev=3596.18 00:34:21.942 clat percentiles (usec): 00:34:21.943 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 243], 20.00th=[ 285], 00:34:21.943 | 30.00th=[ 306], 40.00th=[ 322], 50.00th=[ 343], 60.00th=[ 367], 00:34:21.944 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[ 474], 95.00th=[ 515], 00:34:21.944 | 99.00th=[ 586], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:21.944 | 99.99th=[41157] 00:34:21.945 bw ( KiB/s): min= 176, max= 9864, per=15.36%, avg=5639.80, stdev=4987.52, samples=5 00:34:21.945 iops : min= 44, max= 2466, avg=1409.80, stdev=1247.09, samples=5 00:34:21.946 lat (usec) : 250=12.64%, 500=80.78%, 750=5.73% 00:34:21.946 lat (msec) : 4=0.02%, 50=0.81% 00:34:21.946 cpu : usr=0.54%, sys=2.48%, ctx=4345, majf=0, minf=1 00:34:21.947 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:21.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.948 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:21.948 issued rwts: total=4344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:21.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:21.948 00:34:21.948 Run status group 0 (all jobs): 00:34:21.949 READ: bw=35.9MiB/s (37.6MB/s), 5824KiB/s-12.8MiB/s (5963kB/s-13.5MB/s), io=139MiB (145MB), run=2983-3868msec 00:34:21.949 00:34:21.949 Disk stats (read/write): 00:34:21.950 nvme0n1: ios=10522/0, merge=0/0, ticks=3073/0, in_queue=3073, 
util=95.19% 00:34:21.950 nvme0n2: ios=9162/0, merge=0/0, ticks=3600/0, in_queue=3600, util=94.93% 00:34:21.950 nvme0n3: ios=10486/0, merge=0/0, ticks=2866/0, in_queue=2866, util=96.76% 00:34:21.951 nvme0n4: ios=4222/0, merge=0/0, ticks=2746/0, in_queue=2746, util=96.75% 00:34:21.952 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.953 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:21.954 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.955 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:21.955 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.956 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:21.957 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:21.958 10:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:21.958 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:21.959 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2703213 
00:34:21.959 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:21.960 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:21.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:21.961 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:21.961 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:21.962 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:21.962 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:21.963 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:21.963 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:21.964 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:21.964 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:21.965 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:21.965 nvmf hotplug test: fio failed as expected 00:34:21.966 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.966 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f 
./local-job0-0-verify.state 00:34:21.967 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:21.967 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:21.968 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:21.968 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:21.969 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.969 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:21.970 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.970 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:21.971 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.971 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.971 rmmod nvme_tcp 00:34:21.971 rmmod nvme_fabrics 00:34:21.971 rmmod nvme_keyring 00:34:21.972 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.972 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:21.973 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:21.973 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2701306 ']' 00:34:21.974 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2701306 00:34:21.974 10:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2701306 ']' 00:34:21.975 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2701306 00:34:21.975 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:21.976 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.976 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2701306 00:34:21.977 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.977 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.978 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2701306' 00:34:21.978 killing process with pid 2701306 00:34:21.979 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2701306 00:34:21.979 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2701306 00:34:21.979 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:21.980 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.980 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.981 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:21.981 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # 
iptables-save 00:34:21.982 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.982 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.983 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.983 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.983 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.984 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.984 10:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.985 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:21.985 00:34:21.985 real 0m24.006s 00:34:21.985 user 1m7.200s 00:34:21.985 sys 0m10.678s 00:34:21.985 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:21.986 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:21.986 ************************************ 00:34:21.986 END TEST nvmf_fio_target 00:34:21.986 ************************************ 00:34:21.987 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:21.987 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:21.987 10:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:21.988 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:21.988 ************************************ 00:34:21.988 START TEST nvmf_bdevio 00:34:21.988 ************************************ 00:34:21.989 10:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:21.989 * Looking for test storage... 00:34:21.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:21.990 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:21.990 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:34:21.990 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:21.991 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:21.991 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:21.991 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:21.992 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:21.992 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:21.992 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:21.993 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:21.993 10:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:21.993 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:21.994 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:21.994 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:21.994 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:21.995 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:21.995 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:21.995 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:21.996 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:21.996 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:21.996 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:21.997 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:21.997 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:21.997 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:21.998 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:21.998 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:21.998 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:21.999 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:21.999 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:21.999 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.000 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.000 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:22.001 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.001 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.001 --rc genhtml_branch_coverage=1 
00:34:22.001 --rc genhtml_function_coverage=1 00:34:22.001 --rc genhtml_legend=1 00:34:22.001 --rc geninfo_all_blocks=1 00:34:22.002 --rc geninfo_unexecuted_blocks=1 00:34:22.002 00:34:22.002 ' 00:34:22.002 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.002 --rc genhtml_branch_coverage=1 00:34:22.002 --rc genhtml_function_coverage=1 00:34:22.002 --rc genhtml_legend=1 00:34:22.002 --rc geninfo_all_blocks=1 00:34:22.003 --rc geninfo_unexecuted_blocks=1 00:34:22.003 00:34:22.003 ' 00:34:22.003 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:22.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.003 --rc genhtml_branch_coverage=1 00:34:22.003 --rc genhtml_function_coverage=1 00:34:22.003 --rc genhtml_legend=1 00:34:22.003 --rc geninfo_all_blocks=1 00:34:22.004 --rc geninfo_unexecuted_blocks=1 00:34:22.004 00:34:22.004 ' 00:34:22.004 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.004 --rc genhtml_branch_coverage=1 00:34:22.004 --rc genhtml_function_coverage=1 00:34:22.004 --rc genhtml_legend=1 00:34:22.005 --rc geninfo_all_blocks=1 00:34:22.005 --rc geninfo_unexecuted_blocks=1 00:34:22.005 00:34:22.005 ' 00:34:22.005 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.005 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:22.006 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.006 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.006 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.007 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.007 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.007 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.008 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.008 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.008 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.009 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.009 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.010 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.010 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.010 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.011 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.011 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.012 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.012 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.012 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.013 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.013 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.015 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.017 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.018 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.019 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:22.020 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.021 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:22.021 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.021 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.021 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.022 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.055 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.056 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.056 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.056 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.057 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.057 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.057 10:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:22.058 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:22.058 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:22.058 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.059 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.059 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.060 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.060 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.060 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.061 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.061 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.062 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.062 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.063 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.063 10:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.064 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:34:22.064 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.065 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.065 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.065 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.066 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.066 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.067 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.067 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.067 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:22.068 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.068 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:22.069 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.069 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:22.070 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.070 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.071 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.071 10:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.072 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.072 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.073 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.074 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.074 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.075 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.075 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.076 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.076 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.077 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.077 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.078 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.078 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.078 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.079 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.080 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.080 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:22.080 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:22.081 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.081 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.082 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.082 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.082 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.083 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.083 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:22.083 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:22.084 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.084 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.084 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.085 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.085 10:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.086 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.086 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.086 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.087 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.087 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.088 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.088 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.088 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.089 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.089 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.090 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:22.090 Found net devices under 0000:09:00.0: cvl_0_0 00:34:22.090 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.091 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.091 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:34:22.091 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.094 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.095 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.095 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.095 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.096 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:22.096 Found net devices under 0000:09:00.1: cvl_0_1 00:34:22.096 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.097 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.097 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.097 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.098 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.098 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.100 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.100 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.101 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.101 10:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.102 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.102 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.102 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.103 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.103 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.104 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.104 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.105 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.105 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.105 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.106 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.106 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.107 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.107 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:34:22.108 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.109 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.109 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.110 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.110 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.111 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.111 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:34:22.111 00:34:22.111 --- 10.0.0.2 ping statistics --- 00:34:22.111 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.111 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:34:22.112 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:22.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:34:22.112 00:34:22.112 --- 10.0.0.1 ping statistics --- 00:34:22.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.113 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:34:22.113 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.114 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:22.114 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:22.115 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.115 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.116 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.116 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.116 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.117 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.117 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:22.118 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.118 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.119 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.119 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2706055 00:34:22.120 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:22.120 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2706055 00:34:22.121 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2706055 ']' 00:34:22.121 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.122 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.122 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.123 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.124 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.124 [2024-12-09 10:43:43.511556] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.124 [2024-12-09 10:43:43.512602] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:34:22.125 [2024-12-09 10:43:43.512655] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.126 [2024-12-09 10:43:43.584462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.126 [2024-12-09 10:43:43.644970] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.127 [2024-12-09 10:43:43.645033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.128 [2024-12-09 10:43:43.645062] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.128 [2024-12-09 10:43:43.645073] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.128 [2024-12-09 10:43:43.645083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.129 [2024-12-09 10:43:43.646878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:22.129 [2024-12-09 10:43:43.646941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:22.130 [2024-12-09 10:43:43.647007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:22.130 [2024-12-09 10:43:43.647010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.130 [2024-12-09 10:43:43.746317] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:22.131 [2024-12-09 10:43:43.746526] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:22.131 [2024-12-09 10:43:43.746825] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:22.132 [2024-12-09 10:43:43.747535] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:22.133 [2024-12-09 10:43:43.747753] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:22.133 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.133 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:22.134 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:22.134 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.134 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.135 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.149 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:22.149 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.150 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.150 [2024-12-09 10:43:43.803769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.150 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.150 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:22.151 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.151 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.151 Malloc0 00:34:22.151 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.151 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:22.152 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.152 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.152 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.153 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:22.153 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.153 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.154 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.154 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.155 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.155 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.156 [2024-12-09 10:43:43.875948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:34:22.156 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.156 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:22.157 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:22.157 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:22.157 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:22.158 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.158 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.158 { 00:34:22.158 "params": { 00:34:22.158 "name": "Nvme$subsystem", 00:34:22.158 "trtype": "$TEST_TRANSPORT", 00:34:22.158 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.158 "adrfam": "ipv4", 00:34:22.158 "trsvcid": "$NVMF_PORT", 00:34:22.158 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.159 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.159 "hdgst": ${hdgst:-false}, 00:34:22.159 "ddgst": ${ddgst:-false} 00:34:22.159 }, 00:34:22.159 "method": "bdev_nvme_attach_controller" 00:34:22.159 } 00:34:22.159 EOF 00:34:22.159 )") 00:34:22.159 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:22.159 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:34:22.160 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:22.160 10:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:22.160 "params": { 00:34:22.160 "name": "Nvme1", 00:34:22.160 "trtype": "tcp", 00:34:22.160 "traddr": "10.0.0.2", 00:34:22.160 "adrfam": "ipv4", 00:34:22.160 "trsvcid": "4420", 00:34:22.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:22.161 "hdgst": false, 00:34:22.161 "ddgst": false 00:34:22.161 }, 00:34:22.161 "method": "bdev_nvme_attach_controller" 00:34:22.161 }' 00:34:22.161 [2024-12-09 10:43:43.927808] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:34:22.162 [2024-12-09 10:43:43.927875] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2706085 ] 00:34:22.162 [2024-12-09 10:43:43.997084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:22.162 [2024-12-09 10:43:44.061124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.163 [2024-12-09 10:43:44.061177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.163 [2024-12-09 10:43:44.061182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.163 I/O targets: 00:34:22.163 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:22.163 00:34:22.163 00:34:22.163 CUnit - A unit testing framework for C - Version 2.1-3 00:34:22.164 http://cunit.sourceforge.net/ 00:34:22.164 00:34:22.164 00:34:22.164 Suite: bdevio tests on: Nvme1n1 00:34:22.164 Test: blockdev write read block ...passed 00:34:22.164 Test: blockdev write zeroes read block ...passed 00:34:22.164 Test: blockdev write zeroes read no split ...passed 00:34:22.164 Test: blockdev 
write zeroes read split ...passed 00:34:22.164 Test: blockdev write zeroes read split partial ...passed 00:34:22.165 Test: blockdev reset ...[2024-12-09 10:43:44.381894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:22.165 [2024-12-09 10:43:44.381996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18d28c0 (9): Bad file descriptor 00:34:22.166 [2024-12-09 10:43:44.476237] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:34:22.166 passed 00:34:22.166 Test: blockdev write read 8 blocks ...passed 00:34:22.166 Test: blockdev write read size > 128k ...passed 00:34:22.166 Test: blockdev write read invalid size ...passed 00:34:22.166 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:22.167 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:22.167 Test: blockdev write read max offset ...passed 00:34:22.167 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:22.167 Test: blockdev writev readv 8 blocks ...passed 00:34:22.167 Test: blockdev writev readv 30 x 1block ...passed 00:34:22.168 Test: blockdev writev readv block ...passed 00:34:22.168 Test: blockdev writev readv size > 128k ...passed 00:34:22.168 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:22.169 Test: blockdev comparev and writev ...[2024-12-09 10:43:44.690718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.169 [2024-12-09 10:43:44.690753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:22.170 [2024-12-09 10:43:44.690778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.170 
[2024-12-09 10:43:44.690795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:22.171 [2024-12-09 10:43:44.691163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.171 [2024-12-09 10:43:44.691188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:22.172 [2024-12-09 10:43:44.691211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.172 [2024-12-09 10:43:44.691240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:22.172 [2024-12-09 10:43:44.691614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.173 [2024-12-09 10:43:44.691638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:22.173 [2024-12-09 10:43:44.691660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.174 [2024-12-09 10:43:44.691676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:22.174 [2024-12-09 10:43:44.692050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.175 [2024-12-09 10:43:44.692074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:22.175 [2024-12-09 10:43:44.692096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:22.176 [2024-12-09 10:43:44.692112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:22.176 passed 00:34:22.176 Test: blockdev nvme passthru rw ...passed 00:34:22.176 Test: blockdev nvme passthru vendor specific ...[2024-12-09 10:43:44.775404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:22.177 [2024-12-09 10:43:44.775431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:22.177 [2024-12-09 10:43:44.775588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:22.178 [2024-12-09 10:43:44.775612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:22.178 [2024-12-09 10:43:44.775757] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:22.179 [2024-12-09 10:43:44.775781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:22.179 [2024-12-09 10:43:44.775933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:22.179 [2024-12-09 10:43:44.775956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:22.179 passed 00:34:22.180 Test: blockdev nvme admin passthru ...passed 00:34:22.180 Test: blockdev copy ...passed 00:34:22.180 00:34:22.180 Run Summary: Type Total Ran Passed Failed Inactive 00:34:22.180 suites 1 1 n/a 0 0 00:34:22.180 tests 23 23 23 0 0 00:34:22.180 asserts 152 152 152 0 n/a 00:34:22.180 00:34:22.180 Elapsed time = 1.196 
seconds 00:34:22.181 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.181 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.181 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.182 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.182 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:22.182 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:22.183 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:22.183 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:22.183 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:22.184 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:22.184 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:22.184 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:22.184 rmmod nvme_tcp 00:34:22.184 rmmod nvme_fabrics 00:34:22.184 rmmod nvme_keyring 00:34:22.185 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:22.185 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:22.185 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:22.186 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2706055 ']' 00:34:22.186 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2706055 00:34:22.186 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2706055 ']' 00:34:22.187 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2706055 00:34:22.187 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:22.187 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.188 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2706055 00:34:22.188 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:22.188 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:22.189 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2706055' 00:34:22.189 killing process with pid 2706055 00:34:22.189 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2706055 00:34:22.190 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2706055 00:34:22.190 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:22.190 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:22.190 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:22.191 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:34:22.191 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:22.191 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:22.192 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:22.192 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.192 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.193 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.193 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.193 10:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.194 10:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.194 00:34:22.194 real 0m6.549s 00:34:22.194 user 0m8.475s 00:34:22.194 sys 0m2.605s 00:34:22.194 10:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.194 10:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:22.194 ************************************ 00:34:22.194 END TEST nvmf_bdevio 00:34:22.195 ************************************ 00:34:22.195 10:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:22.195 00:34:22.195 real 3m57.087s 00:34:22.195 user 8m57.186s 00:34:22.195 sys 1m25.121s 00:34:22.195 10:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:34:22.196 10:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:22.196 ************************************ 00:34:22.196 END TEST nvmf_target_core_interrupt_mode 00:34:22.196 ************************************ 00:34:22.197 10:43:47 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:22.197 10:43:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:22.197 10:43:47 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.197 10:43:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.198 ************************************ 00:34:22.198 START TEST nvmf_interrupt 00:34:22.198 ************************************ 00:34:22.198 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:22.199 * Looking for test storage... 
00:34:22.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.199 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:22.199 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:34:22.200 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:22.200 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:22.200 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.200 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.200 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.201 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.201 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.201 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.201 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.202 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.202 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.202 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.202 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.203 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:22.203 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:22.203 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.204 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.204 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:22.204 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:22.204 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.205 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:22.205 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.205 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:22.205 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:22.206 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.206 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:22.206 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.206 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.207 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.207 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:22.207 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.208 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.208 --rc genhtml_branch_coverage=1 00:34:22.208 --rc genhtml_function_coverage=1 00:34:22.208 --rc genhtml_legend=1 00:34:22.208 --rc geninfo_all_blocks=1 00:34:22.208 --rc geninfo_unexecuted_blocks=1 00:34:22.208 00:34:22.208 ' 00:34:22.209 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.209 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.209 --rc genhtml_branch_coverage=1 00:34:22.209 --rc 
genhtml_function_coverage=1 00:34:22.209 --rc genhtml_legend=1 00:34:22.209 --rc geninfo_all_blocks=1 00:34:22.209 --rc geninfo_unexecuted_blocks=1 00:34:22.209 00:34:22.209 ' 00:34:22.210 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:22.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.210 --rc genhtml_branch_coverage=1 00:34:22.210 --rc genhtml_function_coverage=1 00:34:22.210 --rc genhtml_legend=1 00:34:22.210 --rc geninfo_all_blocks=1 00:34:22.211 --rc geninfo_unexecuted_blocks=1 00:34:22.211 00:34:22.211 ' 00:34:22.211 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.211 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.211 --rc genhtml_branch_coverage=1 00:34:22.211 --rc genhtml_function_coverage=1 00:34:22.211 --rc genhtml_legend=1 00:34:22.212 --rc geninfo_all_blocks=1 00:34:22.212 --rc geninfo_unexecuted_blocks=1 00:34:22.212 00:34:22.212 ' 00:34:22.212 10:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.212 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:22.213 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.213 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.213 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.214 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.214 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.214 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.214 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.215 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.215 
10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.215 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.216 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.216 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.217 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.217 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.217 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.217 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.218 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.218 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.219 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.219 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.219 10:43:47 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.221 10:43:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.223 
10:43:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.225 10:43:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.225 10:43:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:22.227 10:43:47 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.227 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:22.227 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.227 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.228 10:43:47 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.228 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.228 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.229 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:22.229 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:22.229 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.229 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.230 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.230 10:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:22.230 10:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:22.231 10:43:47 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:22.231 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.231 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.232 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.232 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.232 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.232 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.233 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.233 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.233 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.234 
10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.234 10:43:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.234 10:43:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.235 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.235 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.235 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.235 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.236 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.236 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.236 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.236 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.237 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.237 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:22.237 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.237 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:22.238 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.238 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:22.238 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.238 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.239 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.239 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.239 10:43:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.240 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.240 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.240 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.241 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.241 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.241 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.242 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.242 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.242 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.242 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.243 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.243 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.243 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.243 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.244 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.244 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:22.244 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:22.244 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.244 10:43:49 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.245 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.245 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.245 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.245 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.246 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:22.246 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:22.246 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.246 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.246 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.247 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.247 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.247 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.247 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.248 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.248 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.248 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.248 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.249 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.249 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.249 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.249 10:43:49 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.250 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:22.250 Found net devices under 0000:09:00.0: cvl_0_0 00:34:22.250 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.250 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.251 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.251 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.251 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.252 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.252 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.252 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.252 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:22.252 Found net devices under 0000:09:00.1: cvl_0_1 00:34:22.253 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.253 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.253 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.254 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.254 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.254 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.254 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.254 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.255 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.255 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.255 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.256 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.258 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.258 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.259 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.259 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.259 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.260 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.260 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.260 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.261 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.261 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.261 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.261 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.262 10:43:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.262 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.262 10:43:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.263 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.263 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:22.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:34:22.264 00:34:22.264 --- 10.0.0.2 ping statistics --- 00:34:22.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.264 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:34:22.264 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:34:22.264 00:34:22.265 --- 10.0.0.1 ping statistics --- 00:34:22.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.265 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:34:22.265 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.265 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:22.266 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:22.266 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.266 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.266 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.267 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.267 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.267 10:43:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.267 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:22.268 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.268 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.268 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.268 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2708282 00:34:22.269 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:22.269 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2708282 00:34:22.269 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2708282 ']' 00:34:22.270 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.270 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.270 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.271 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.271 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.271 [2024-12-09 10:43:50.092760] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:22.272 [2024-12-09 10:43:50.093906] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:34:22.272 [2024-12-09 10:43:50.093973] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.273 [2024-12-09 10:43:50.163544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:22.273 [2024-12-09 10:43:50.219844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.273 [2024-12-09 10:43:50.219919] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.274 [2024-12-09 10:43:50.219932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.274 [2024-12-09 10:43:50.219942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.274 [2024-12-09 10:43:50.219959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:22.275 [2024-12-09 10:43:50.221381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.275 [2024-12-09 10:43:50.221387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.275 [2024-12-09 10:43:50.311220] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:22.276 [2024-12-09 10:43:50.311240] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:22.276 [2024-12-09 10:43:50.311520] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:34:22.276 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.277 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:34:22.277 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:22.277 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.277 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.278 10:43:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.278 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:22.278 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:22.278 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:22.279 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:22.279 5000+0 records in 00:34:22.279 5000+0 records out 00:34:22.279 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0120103 s, 853 MB/s 00:34:22.280 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:22.280 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.280 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.280 AIO0 00:34:22.280 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.281 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:22.281 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.281 10:43:50 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.281 [2024-12-09 10:43:50.410025] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.282 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.282 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:22.282 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.283 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.283 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.283 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:22.283 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.284 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.284 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.284 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.285 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.285 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.285 [2024-12-09 10:43:50.434250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.285 10:43:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.286 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:22.286 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2708282 0 00:34:22.286 10:43:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2708282 0 idle 00:34:22.287 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.287 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.287 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.287 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.287 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.288 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.288 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.288 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.288 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.289 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.289 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.289 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:22.290 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708282 root 20 0 128.2g 48000 35328 S 0.0 0.1 0:00.27 reactor_0' 00:34:22.290 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708282 root 20 0 128.2g 48000 35328 S 0.0 0.1 0:00.27 reactor_0 00:34:22.290 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.290 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.291 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.291 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.291 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.291 
10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.292 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.292 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.292 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:22.292 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2708282 1 00:34:22.293 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2708282 1 idle 00:34:22.293 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.293 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:22.294 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.294 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.294 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.294 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.294 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.295 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.295 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.295 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.295 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.296 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.296 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708296 root 20 0 128.2g 48000 35328 S 0.0 0.1 0:00.00 reactor_1' 00:34:22.297 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708296 root 20 0 128.2g 
48000 35328 S 0.0 0.1 0:00.00 reactor_1 00:34:22.297 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.297 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.297 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.297 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.298 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.298 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.298 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.298 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.299 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:22.299 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2708336 00:34:22.300 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.300 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:22.300 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:22.301 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2708282 0 00:34:22.301 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2708282 0 busy 00:34:22.301 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.301 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.302 10:43:50 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:34:22.302 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:22.302 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.302 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:22.303 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.303 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.303 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.303 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.304 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:22.304 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708282 root 20 0 128.2g 48768 35328 R 80.0 0.1 0:00.39 reactor_0' 00:34:22.304 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708282 root 20 0 128.2g 48768 35328 R 80.0 0.1 0:00.39 reactor_0 00:34:22.305 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.305 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.305 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=80.0 00:34:22.305 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=80 00:34:22.306 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:22.306 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:22.306 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:22.306 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.307 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:22.307 10:43:50 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:22.307 10:43:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2708282 1 00:34:22.307 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2708282 1 busy 00:34:22.308 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.308 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:22.308 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:22.308 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:22.309 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.309 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:22.309 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.309 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.310 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.310 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.310 10:43:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.311 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708296 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.23 reactor_1' 00:34:22.311 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708296 root 20 0 128.2g 48768 35328 R 99.9 0.1 0:00.23 reactor_1 00:34:22.311 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.312 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.312 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:22.312 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:34:22.312 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:22.313 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:22.313 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:22.313 10:43:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.314 10:43:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2708336 00:34:22.314 Initializing NVMe Controllers 00:34:22.314 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:22.314 Controller IO queue size 256, less than required. 00:34:22.314 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:22.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:22.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:22.315 Initialization complete. Launching workers. 
00:34:22.315 ======================================================== 00:34:22.316 Latency(us) 00:34:22.316 Device Information : IOPS MiB/s Average min max 00:34:22.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13849.17 54.10 18496.43 4219.17 22831.04 00:34:22.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13625.27 53.22 18801.16 3949.35 23106.67 00:34:22.320 ======================================================== 00:34:22.320 Total : 27474.44 107.32 18647.55 3949.35 23106.67 00:34:22.320 00:34:22.321 10:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:22.321 10:44:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2708282 0 00:34:22.321 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2708282 0 idle 00:34:22.322 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.322 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.322 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.322 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.323 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.323 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.323 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.324 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.324 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.324 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.324 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.325 10:44:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:34:22.325 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708282 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.21 reactor_0' 00:34:22.326 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708282 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:20.21 reactor_0 00:34:22.326 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.326 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.326 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.327 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.327 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.327 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.328 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.328 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.328 10:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:22.328 10:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2708282 1 00:34:22.329 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2708282 1 idle 00:34:22.329 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.329 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:22.329 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.330 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.330 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.330 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.331 10:44:01 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.331 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.331 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.331 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.332 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.332 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.332 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708296 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.98 reactor_1' 00:34:22.333 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708296 root 20 0 128.2g 48768 35328 S 0.0 0.1 0:09.98 reactor_1 00:34:22.333 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.333 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.334 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.334 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.334 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.334 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.335 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.335 10:44:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.336 10:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:22.336 10:44:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:34:22.336 10:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:34:22.336 10:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:22.337 10:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:22.337 10:44:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:34:22.337 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:22.337 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:22.338 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:22.338 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:22.338 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:22.339 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:34:22.339 10:44:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:22.339 10:44:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2708282 0 00:34:22.340 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2708282 0 idle 00:34:22.340 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.340 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:22.341 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.341 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.341 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.341 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.342 10:44:03 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.342 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.342 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.342 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.342 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.343 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:22.343 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708282 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.31 reactor_0' 00:34:22.343 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708282 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:20.31 reactor_0 00:34:22.344 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.344 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.344 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.344 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.345 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.345 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.345 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:22.345 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.345 10:44:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:22.346 10:44:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2708282 1 00:34:22.346 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2708282 1 idle 00:34:22.346 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2708282 00:34:22.346 
10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:22.346 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:22.347 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:22.347 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:22.347 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:22.347 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:22.347 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:22.348 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:22.348 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:22.348 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2708282 -w 256 00:34:22.348 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:22.348 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2708296 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1' 00:34:22.348 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2708296 root 20 0 128.2g 61056 35328 S 0.0 0.1 0:10.01 reactor_1 00:34:22.349 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:22.349 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:22.349 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:22.349 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:22.350 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:22.350 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:22.350 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:34:22.350 10:44:03 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:22.351 10:44:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:22.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:22.351 10:44:03 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:22.352 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:34:22.352 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:22.352 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:22.352 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:22.353 10:44:03 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:22.353 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:34:22.353 10:44:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:22.353 10:44:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:22.353 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:22.354 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:22.354 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:22.354 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:22.354 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:22.355 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:22.355 rmmod nvme_tcp 00:34:22.355 rmmod nvme_fabrics 00:34:22.355 rmmod nvme_keyring 00:34:22.355 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:22.355 10:44:04 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:22.355 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:22.356 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2708282 ']' 00:34:22.356 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2708282 00:34:22.356 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2708282 ']' 00:34:22.356 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2708282 00:34:22.356 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:34:22.357 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.357 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2708282 00:34:22.357 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.357 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.358 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2708282' 00:34:22.358 killing process with pid 2708282 00:34:22.358 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2708282 00:34:22.358 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2708282 00:34:22.358 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:22.358 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:22.359 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:22.359 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:22.359 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:22.359 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:22.359 10:44:04 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:34:22.359 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.360 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.360 10:44:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.360 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:22.360 10:44:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.360 10:44:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.360 00:34:22.360 real 0m18.896s 00:34:22.360 user 0m37.040s 00:34:22.360 sys 0m6.536s 00:34:22.384 10:44:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.384 10:44:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:22.384 ************************************ 00:34:22.384 END TEST nvmf_interrupt 00:34:22.384 ************************************ 00:34:22.384 00:34:22.384 real 25m5.744s 00:34:22.384 user 58m42.226s 00:34:22.384 sys 6m49.149s 00:34:22.384 10:44:06 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.384 10:44:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.384 ************************************ 00:34:22.384 END TEST nvmf_tcp 00:34:22.384 ************************************ 00:34:22.385 10:44:06 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:34:22.385 10:44:06 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:22.385 10:44:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:22.385 10:44:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.385 10:44:06 -- common/autotest_common.sh@10 -- # set +x 00:34:22.385 ************************************ 
00:34:22.385 START TEST spdkcli_nvmf_tcp 00:34:22.385 ************************************ 00:34:22.386 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:22.386 * Looking for test storage... 00:34:22.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:22.386 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:22.386 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.387 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.388 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.388 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.388 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.388 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.388 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.388 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:22.389 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:22.390 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.391 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.391 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.391 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:22.391 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.391 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.392 --rc genhtml_branch_coverage=1 00:34:22.392 --rc genhtml_function_coverage=1 00:34:22.392 --rc genhtml_legend=1 00:34:22.392 --rc geninfo_all_blocks=1 00:34:22.392 --rc geninfo_unexecuted_blocks=1 00:34:22.392 00:34:22.392 ' 00:34:22.392 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.392 --rc genhtml_branch_coverage=1 00:34:22.392 --rc genhtml_function_coverage=1 00:34:22.392 --rc genhtml_legend=1 00:34:22.393 --rc geninfo_all_blocks=1 
00:34:22.393 --rc geninfo_unexecuted_blocks=1 00:34:22.393 00:34:22.393 ' 00:34:22.393 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:22.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.393 --rc genhtml_branch_coverage=1 00:34:22.393 --rc genhtml_function_coverage=1 00:34:22.393 --rc genhtml_legend=1 00:34:22.393 --rc geninfo_all_blocks=1 00:34:22.393 --rc geninfo_unexecuted_blocks=1 00:34:22.393 00:34:22.393 ' 00:34:22.394 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.394 --rc genhtml_branch_coverage=1 00:34:22.394 --rc genhtml_function_coverage=1 00:34:22.394 --rc genhtml_legend=1 00:34:22.394 --rc geninfo_all_blocks=1 00:34:22.394 --rc geninfo_unexecuted_blocks=1 00:34:22.394 00:34:22.394 ' 00:34:22.394 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:22.395 10:44:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:22.395 10:44:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:22.395 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.395 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:22.396 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.396 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.396 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.396 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.396 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:22.396 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.397 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.397 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.397 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.397 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.397 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.398 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.398 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.398 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.398 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.398 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.399 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.399 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.399 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.399 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.400 10:44:06 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.400 10:44:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.401 10:44:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.402 10:44:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.403 10:44:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:22.403 10:44:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.404 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:22.404 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:34:22.404 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.404 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.404 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.404 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.405 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.405 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.405 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.405 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.405 10:44:06 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.406 10:44:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:22.407 10:44:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2710525 00:34:22.407 10:44:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:22.407 10:44:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2710525 00:34:22.407 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2710525 ']' 00:34:22.408 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.408 
10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.408 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.408 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.409 10:44:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.409 [2024-12-09 10:44:06.775891] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:34:22.409 [2024-12-09 10:44:06.775995] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2710525 ] 00:34:22.410 [2024-12-09 10:44:06.842425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:22.410 [2024-12-09 10:44:06.899540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.410 [2024-12-09 10:44:06.899544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.410 10:44:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.410 10:44:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:34:22.411 10:44:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:22.411 10:44:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.411 10:44:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.411 10:44:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:22.411 10:44:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:22.411 10:44:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:34:22.412 10:44:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.412 10:44:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.412 10:44:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:22.412 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:22.412 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:22.413 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:22.413 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:22.413 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:22.413 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:22.413 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:22.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:22.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:22.414 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:22.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:22.415 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:22.415 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:34:22.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:22.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:22.416 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:22.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:22.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:22.417 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:22.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:22.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:22.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:22.418 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:22.419 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:22.419 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:22.419 ' 00:34:22.419 [2024-12-09 10:44:09.662574] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.419 [2024-12-09 10:44:10.935053] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:22.419 [2024-12-09 10:44:13.278549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:34:22.420 [2024-12-09 10:44:15.300688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:22.420 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:22.420 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:22.420 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:22.420 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:22.421 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:22.421 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:22.421 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:22.421 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:22.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:22.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:22.422 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:22.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:22.423 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:22.423 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:22.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:22.424 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:22.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:22.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:22.425 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:22.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:22.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:22.426 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:22.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:22.427 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:22.427 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:22.427 10:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:22.427 10:44:16 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.427 10:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.428 10:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:22.428 10:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.428 10:44:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.428 10:44:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:22.428 10:44:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:22.429 10:44:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:22.429 10:44:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:22.430 10:44:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:22.430 10:44:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.430 10:44:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.430 10:44:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:22.430 10:44:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.430 10:44:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.431 10:44:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:22.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:22.431 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:22.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:22.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:22.432 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:22.432 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:22.433 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:22.433 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:22.433 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:22.433 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:22.433 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:22.433 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:22.433 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:22.433 ' 00:34:22.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:22.434 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:22.435 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:22.435 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:22.435 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:22.435 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:22.435 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:22.435 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:22.435 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:22.435 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:22.435 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:22.435 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:22.435 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:22.435 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2710525 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2710525 ']' 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2710525 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2710525 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.435 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2710525' 00:34:22.435 killing process with pid 2710525 00:34:22.436 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2710525 00:34:22.436 10:44:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2710525 00:34:22.436 10:44:23 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2710525 ']' 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2710525 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2710525 ']' 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2710525 00:34:22.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2710525) - No such process 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2710525 is not found' 00:34:22.436 Process with pid 2710525 is not found 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:22.436 00:34:22.436 real 0m16.560s 00:34:22.436 user 0m35.234s 00:34:22.436 sys 0m0.733s 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.436 10:44:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:22.436 ************************************ 00:34:22.436 END TEST spdkcli_nvmf_tcp 00:34:22.436 ************************************ 00:34:22.437 10:44:23 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:22.437 10:44:23 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:22.437 10:44:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:34:22.437 10:44:23 -- common/autotest_common.sh@10 -- # set +x 00:34:22.437 ************************************ 00:34:22.437 START TEST nvmf_identify_passthru 00:34:22.437 ************************************ 00:34:22.437 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:22.437 * Looking for test storage... 00:34:22.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.437 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:22.437 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:34:22.438 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:22.438 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:22.438 10:44:23 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.438 10:44:23 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.438 10:44:23 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.438 10:44:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.438 10:44:23 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.438 10:44:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.440 10:44:23 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.440 10:44:23 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:22.441 10:44:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.442 10:44:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:22.442 10:44:23 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.442 10:44:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:22.442 10:44:23 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:22.443 10:44:23 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.443 10:44:23 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:22.443 10:44:23 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.444 10:44:23 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.444 10:44:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.444 10:44:23 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:22.444 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.445 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.445 --rc genhtml_branch_coverage=1 00:34:22.445 --rc genhtml_function_coverage=1 00:34:22.445 --rc genhtml_legend=1 00:34:22.445 --rc geninfo_all_blocks=1 00:34:22.445 --rc geninfo_unexecuted_blocks=1 00:34:22.445 
00:34:22.445 ' 00:34:22.445 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.445 --rc genhtml_branch_coverage=1 00:34:22.446 --rc genhtml_function_coverage=1 00:34:22.446 --rc genhtml_legend=1 00:34:22.446 --rc geninfo_all_blocks=1 00:34:22.446 --rc geninfo_unexecuted_blocks=1 00:34:22.446 00:34:22.446 ' 00:34:22.446 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:22.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.446 --rc genhtml_branch_coverage=1 00:34:22.446 --rc genhtml_function_coverage=1 00:34:22.446 --rc genhtml_legend=1 00:34:22.446 --rc geninfo_all_blocks=1 00:34:22.447 --rc geninfo_unexecuted_blocks=1 00:34:22.447 00:34:22.447 ' 00:34:22.447 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.447 --rc genhtml_branch_coverage=1 00:34:22.447 --rc genhtml_function_coverage=1 00:34:22.447 --rc genhtml_legend=1 00:34:22.447 --rc geninfo_all_blocks=1 00:34:22.447 --rc geninfo_unexecuted_blocks=1 00:34:22.447 00:34:22.447 ' 00:34:22.448 10:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.448 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:22.448 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.448 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.448 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.449 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.449 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.449 10:44:23 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.449 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.449 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.450 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.450 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.450 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.450 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.451 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.451 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.451 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.451 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.451 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.452 10:44:23 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.452 10:44:23 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.452 10:44:23 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.452 10:44:23 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.453 10:44:23 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.455 10:44:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.456 10:44:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.456 10:44:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:22.456 10:44:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.457 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:22.457 10:44:23 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.457 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.457 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.457 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.457 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.457 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.458 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.458 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.458 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.458 10:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.458 10:44:23 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.458 10:44:23 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.459 10:44:23 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.459 10:44:23 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.459 10:44:23 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.460 10:44:23 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.461 10:44:23 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.461 10:44:23 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:22.462 10:44:23 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.462 10:44:23 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:22.462 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.462 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.462 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.462 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.462 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.462 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.463 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:22.463 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.463 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.463 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.463 10:44:23 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.463 10:44:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.464 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:22.465 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.466 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.467 
10:44:25 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.467 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:22.468 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.468 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:22.469 Found 0000:09:00.1 
(0x8086 - 0x159b) 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.469 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.470 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:22.471 Found net devices under 0000:09:00.0: cvl_0_0 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.471 10:44:25 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.471 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:22.472 Found net devices under 0000:09:00.1: cvl_0_1 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.472 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.473 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.474 
10:44:25 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.474 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.474 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.474 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.474 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.474 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.474 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.475 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.475 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.475 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.475 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.475 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.475 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.476 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.476 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:22.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:34:22.476 00:34:22.476 --- 10.0.0.2 ping statistics --- 00:34:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.476 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:34:22.476 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:34:22.476 00:34:22.476 --- 10.0.0.1 ping statistics --- 00:34:22.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.477 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.477 10:44:25 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.478 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.478 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:22.478 
10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:34:22.478 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:34:22.479 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:22.479 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:22.479 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:34:22.479 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:34:22.479 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:0b:00.0 00:34:22.479 10:44:25 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:0b:00.0 00:34:22.479 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:22.479 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:22.480 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:22.480 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:22.480 10:44:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:22.480 10:44:29 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:22.480 10:44:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:22.480 10:44:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:22.480 10:44:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:22.480 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:22.481 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:22.481 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.481 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.481 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:22.481 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.481 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.481 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2715161 00:34:22.481 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:22.482 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:22.482 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2715161 00:34:22.482 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2715161 ']' 00:34:22.482 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:34:22.482 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.482 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:22.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.482 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.483 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.483 [2024-12-09 10:44:34.098266] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:34:22.483 [2024-12-09 10:44:34.098364] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.483 [2024-12-09 10:44:34.172317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:22.483 [2024-12-09 10:44:34.230744] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.483 [2024-12-09 10:44:34.230798] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.484 [2024-12-09 10:44:34.230826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.484 [2024-12-09 10:44:34.230837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.484 [2024-12-09 10:44:34.230846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:22.484 [2024-12-09 10:44:34.234173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:22.484 [2024-12-09 10:44:34.234200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:22.484 [2024-12-09 10:44:34.234259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:22.484 [2024-12-09 10:44:34.234262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.484 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.484 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:34:22.485 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:22.485 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.485 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.485 INFO: Log level set to 20 00:34:22.485 INFO: Requests: 00:34:22.485 { 00:34:22.485 "jsonrpc": "2.0", 00:34:22.485 "method": "nvmf_set_config", 00:34:22.485 "id": 1, 00:34:22.485 "params": { 00:34:22.485 "admin_cmd_passthru": { 00:34:22.485 "identify_ctrlr": true 00:34:22.485 } 00:34:22.485 } 00:34:22.485 } 00:34:22.485 00:34:22.485 INFO: response: 00:34:22.485 { 00:34:22.485 "jsonrpc": "2.0", 00:34:22.485 "id": 1, 00:34:22.485 "result": true 00:34:22.485 } 00:34:22.485 00:34:22.485 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.485 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:22.486 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.486 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.486 INFO: Setting log level to 20 00:34:22.486 INFO: Setting log level to 20 00:34:22.486 INFO: Log level set to 20 00:34:22.486 INFO: Log level set to 20 00:34:22.486 
INFO: Requests: 00:34:22.486 { 00:34:22.486 "jsonrpc": "2.0", 00:34:22.486 "method": "framework_start_init", 00:34:22.486 "id": 1 00:34:22.486 } 00:34:22.486 00:34:22.486 INFO: Requests: 00:34:22.486 { 00:34:22.486 "jsonrpc": "2.0", 00:34:22.486 "method": "framework_start_init", 00:34:22.486 "id": 1 00:34:22.486 } 00:34:22.486 00:34:22.486 [2024-12-09 10:44:34.453045] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:22.486 INFO: response: 00:34:22.486 { 00:34:22.486 "jsonrpc": "2.0", 00:34:22.486 "id": 1, 00:34:22.486 "result": true 00:34:22.486 } 00:34:22.486 00:34:22.486 INFO: response: 00:34:22.486 { 00:34:22.486 "jsonrpc": "2.0", 00:34:22.486 "id": 1, 00:34:22.486 "result": true 00:34:22.487 } 00:34:22.487 00:34:22.487 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.487 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:22.487 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.487 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.487 INFO: Setting log level to 40 00:34:22.487 INFO: Setting log level to 40 00:34:22.487 INFO: Setting log level to 40 00:34:22.487 [2024-12-09 10:44:34.463270] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.487 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.487 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:22.488 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.488 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.488 10:44:34 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:22.488 10:44:34 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.488 10:44:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.488 Nvme0n1 00:34:22.488 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.488 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:22.488 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.488 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.489 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.489 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:22.489 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.489 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.489 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.489 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:22.489 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.489 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.490 [2024-12-09 10:44:37.371829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.490 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.490 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:22.490 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.490 10:44:37 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.490 [ 00:34:22.490 { 00:34:22.490 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:22.490 "subtype": "Discovery", 00:34:22.490 "listen_addresses": [], 00:34:22.490 "allow_any_host": true, 00:34:22.490 "hosts": [] 00:34:22.490 }, 00:34:22.490 { 00:34:22.490 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:22.490 "subtype": "NVMe", 00:34:22.490 "listen_addresses": [ 00:34:22.490 { 00:34:22.490 "trtype": "TCP", 00:34:22.490 "adrfam": "IPv4", 00:34:22.490 "traddr": "10.0.0.2", 00:34:22.490 "trsvcid": "4420" 00:34:22.490 } 00:34:22.490 ], 00:34:22.490 "allow_any_host": true, 00:34:22.490 "hosts": [], 00:34:22.491 "serial_number": "SPDK00000000000001", 00:34:22.491 "model_number": "SPDK bdev Controller", 00:34:22.491 "max_namespaces": 1, 00:34:22.491 "min_cntlid": 1, 00:34:22.491 "max_cntlid": 65519, 00:34:22.491 "namespaces": [ 00:34:22.491 { 00:34:22.491 "nsid": 1, 00:34:22.491 "bdev_name": "Nvme0n1", 00:34:22.491 "name": "Nvme0n1", 00:34:22.491 "nguid": "54BFCDB6BDE54EDCAC8E08BABAF2A8DF", 00:34:22.491 "uuid": "54bfcdb6-bde5-4edc-ac8e-08babaf2a8df" 00:34:22.491 } 00:34:22.491 ] 00:34:22.491 } 00:34:22.491 ] 00:34:22.491 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.491 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.491 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:22.491 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:22.492 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:22.492 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.493 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.493 10:44:37 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.493 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:22.493 10:44:37 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:22.493 10:44:37 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:22.493 10:44:37 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:22.493 10:44:37 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:22.493 10:44:37 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:22.493 10:44:37 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:22.493 10:44:37 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:22.493 rmmod nvme_tcp 00:34:22.493 rmmod nvme_fabrics 00:34:22.493 rmmod nvme_keyring 00:34:22.494 10:44:38 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:22.494 10:44:38 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:22.494 10:44:38 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:22.494 10:44:38 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2715161 ']' 00:34:22.494 10:44:38 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2715161 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2715161 ']' 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2715161 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2715161 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.494 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.495 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2715161' 00:34:22.495 killing process with pid 2715161 00:34:22.495 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2715161 00:34:22.495 10:44:38 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2715161 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:22.495 10:44:39 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:22.495 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.496 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.496 10:44:39 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.496 10:44:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:22.496 10:44:39 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.496 10:44:41 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:22.496 00:34:22.496 real 0m18.456s 00:34:22.496 user 0m27.120s 00:34:22.496 sys 0m3.232s 00:34:22.496 10:44:41 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:22.496 10:44:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:22.496 ************************************ 00:34:22.496 END TEST nvmf_identify_passthru 00:34:22.496 ************************************ 00:34:22.496 10:44:41 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:22.496 10:44:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:22.496 10:44:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.497 10:44:41 -- common/autotest_common.sh@10 -- # set +x 00:34:22.497 ************************************ 00:34:22.497 START TEST nvmf_dif 00:34:22.497 ************************************ 00:34:22.497 10:44:41 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:22.497 * Looking for test storage... 
00:34:22.497 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:22.497 10:44:41 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:22.497 10:44:41 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:22.497 10:44:41 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:22.497 10:44:41 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:22.497 10:44:41 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:22.497 10:44:41 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:22.497 10:44:41 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:22.497 10:44:41 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:22.497 10:44:41 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:22.498 10:44:41 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:22.499 10:44:41 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:22.499 10:44:41 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:22.499 10:44:41 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:22.499 10:44:41 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:22.499 10:44:41 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:22.499 10:44:41 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:22.499 10:44:41 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:22.499 10:44:41 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:22.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.499 --rc genhtml_branch_coverage=1 00:34:22.499 --rc genhtml_function_coverage=1 00:34:22.499 --rc genhtml_legend=1 00:34:22.499 --rc geninfo_all_blocks=1 00:34:22.499 --rc geninfo_unexecuted_blocks=1 00:34:22.499 00:34:22.499 ' 00:34:22.499 10:44:41 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:22.499 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.499 --rc genhtml_branch_coverage=1 00:34:22.499 --rc genhtml_function_coverage=1 00:34:22.499 --rc genhtml_legend=1 00:34:22.500 --rc geninfo_all_blocks=1 00:34:22.500 --rc geninfo_unexecuted_blocks=1 00:34:22.500 00:34:22.500 ' 00:34:22.500 10:44:41 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:34:22.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.500 --rc genhtml_branch_coverage=1 00:34:22.500 --rc genhtml_function_coverage=1 00:34:22.500 --rc genhtml_legend=1 00:34:22.500 --rc geninfo_all_blocks=1 00:34:22.500 --rc geninfo_unexecuted_blocks=1 00:34:22.500 00:34:22.500 ' 00:34:22.500 10:44:41 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:22.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:22.500 --rc genhtml_branch_coverage=1 00:34:22.500 --rc genhtml_function_coverage=1 00:34:22.500 --rc genhtml_legend=1 00:34:22.500 --rc geninfo_all_blocks=1 00:34:22.500 --rc geninfo_unexecuted_blocks=1 00:34:22.500 00:34:22.500 ' 00:34:22.500 10:44:41 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:22.500 10:44:41 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:22.500 10:44:41 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:22.500 10:44:41 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:22.500 10:44:41 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:22.501 10:44:41 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:22.501 10:44:41 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:22.502 10:44:41 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:22.502 10:44:41 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:22.502 10:44:41 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:22.502 10:44:41 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:22.502 10:44:41 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:22.502 10:44:41 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.502 10:44:41 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.503 10:44:41 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.503 10:44:41 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:22.503 10:44:41 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:22.503 10:44:41 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:22.503 10:44:41 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:22.503 10:44:41 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:22.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:22.504 10:44:41 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:22.504 10:44:41 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:22.504 10:44:41 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:22.504 10:44:41 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:22.504 10:44:41 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:22.504 10:44:41 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:22.505 10:44:41 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:22.505 10:44:41 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:22.505 10:44:41 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.505 10:44:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:22.505 10:44:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:22.505 10:44:41 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:22.505 10:44:41 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:22.505 10:44:41 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:22.505 10:44:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:22.505 10:44:44 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:22.506 10:44:44 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:22.506 10:44:44 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:22.507 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.507 10:44:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:22.508 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.508 10:44:44 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.509 10:44:44 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:22.509 Found net devices under 0000:09:00.0: cvl_0_0 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:22.509 10:44:44 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:22.510 Found net devices under 0000:09:00.1: cvl_0_1 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:22.510 
10:44:44 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:22.510 10:44:44 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:22.511 10:44:44 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:22.512 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:22.512 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:34:22.512 00:34:22.512 --- 10.0.0.2 ping statistics --- 00:34:22.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.512 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:22.512 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:22.512 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:34:22.512 00:34:22.512 --- 10.0.0.1 ping statistics --- 00:34:22.512 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:22.512 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:22.512 10:44:44 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:22.513 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:22.513 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:22.513 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:22.513 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:22.513 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:22.513 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:22.513 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:22.513 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:22.513 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:22.513 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:22.513 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:22.513 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:22.513 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:34:22.513 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:22.513 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:22.513 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:22.513 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:22.514 10:44:45 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:22.514 10:44:45 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:22.514 10:44:45 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:22.514 10:44:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2718429 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:22.514 10:44:45 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2718429 00:34:22.515 10:44:45 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2718429 ']' 00:34:22.515 10:44:45 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:22.515 10:44:45 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:22.515 10:44:45 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:22.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:22.515 10:44:45 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:22.515 10:44:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.515 [2024-12-09 10:44:45.562820] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:34:22.515 [2024-12-09 10:44:45.562920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:22.515 [2024-12-09 10:44:45.636265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:22.515 [2024-12-09 10:44:45.696233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:22.516 [2024-12-09 10:44:45.696292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:22.516 [2024-12-09 10:44:45.696321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:22.516 [2024-12-09 10:44:45.696333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:22.516 [2024-12-09 10:44:45.696343] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:22.516 [2024-12-09 10:44:45.696968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:22.516 10:44:45 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:22.516 10:44:45 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:22.516 10:44:45 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:22.516 10:44:45 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:22.516 10:44:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.516 10:44:45 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:22.516 10:44:45 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:22.517 10:44:45 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:22.517 10:44:45 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.517 10:44:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.517 [2024-12-09 10:44:45.849660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:22.517 10:44:45 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.517 10:44:45 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:22.517 10:44:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:22.517 10:44:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:22.517 10:44:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:22.517 ************************************ 00:34:22.517 START TEST fio_dif_1_default 00:34:22.518 ************************************ 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.518 bdev_null0 00:34:22.518 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.519 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:22.519 [2024-12-09 10:44:45.905939] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.520 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:22.520 { 00:34:22.520 "params": { 00:34:22.520 "name": "Nvme$subsystem", 00:34:22.520 "trtype": "$TEST_TRANSPORT", 00:34:22.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:22.520 "adrfam": "ipv4", 00:34:22.520 "trsvcid": "$NVMF_PORT", 00:34:22.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:22.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:22.520 "hdgst": ${hdgst:-false}, 00:34:22.520 "ddgst": ${ddgst:-false} 00:34:22.520 }, 00:34:22.521 "method": "bdev_nvme_attach_controller" 00:34:22.521 } 00:34:22.521 EOF 00:34:22.521 )") 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:22.521 10:44:45 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:22.521 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:22.522 10:44:45 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:22.522 "params": { 00:34:22.522 "name": "Nvme0", 00:34:22.522 "trtype": "tcp", 00:34:22.522 "traddr": "10.0.0.2", 00:34:22.522 "adrfam": "ipv4", 00:34:22.522 "trsvcid": "4420", 00:34:22.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:22.523 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:22.523 "hdgst": false, 00:34:22.523 "ddgst": false 00:34:22.523 }, 00:34:22.523 "method": "bdev_nvme_attach_controller" 00:34:22.523 }' 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:22.523 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:22.524 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:22.524 10:44:45 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:22.524 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:22.524 fio-3.35 
00:34:22.524 Starting 1 thread 00:34:24.530 00:34:24.530 filename0: (groupid=0, jobs=1): err= 0: pid=2718657: Mon Dec 9 10:44:56 2024 00:34:24.530 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10014msec) 00:34:24.530 slat (nsec): min=6708, max=70196, avg=9545.93, stdev=5160.18 00:34:24.530 clat (usec): min=543, max=45455, avg=40346.11, stdev=5097.64 00:34:24.530 lat (usec): min=551, max=45490, avg=40355.66, stdev=5097.05 00:34:24.530 clat percentiles (usec): 00:34:24.530 | 1.00th=[ 644], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:24.530 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:24.530 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:24.530 | 99.00th=[41157], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:34:24.530 | 99.99th=[45351] 00:34:24.530 bw ( KiB/s): min= 383, max= 448, per=99.69%, avg=395.15, stdev=21.49, samples=20 00:34:24.530 iops : min= 95, max= 112, avg=98.75, stdev= 5.40, samples=20 00:34:24.530 lat (usec) : 750=1.61% 00:34:24.530 lat (msec) : 50=98.39% 00:34:24.530 cpu : usr=91.05%, sys=8.66%, ctx=19, majf=0, minf=261 00:34:24.530 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:24.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:24.530 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:24.530 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:24.530 00:34:24.530 Run status group 0 (all jobs): 00:34:24.530 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10014-10014msec 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:24.790 10:44:57 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.790 00:34:24.790 real 0m11.234s 00:34:24.790 user 0m10.262s 00:34:24.790 sys 0m1.128s 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:24.790 10:44:57 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 ************************************ 00:34:24.790 END TEST fio_dif_1_default 00:34:24.790 ************************************ 00:34:24.790 10:44:57 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:24.790 10:44:57 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:24.790 10:44:57 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:24.790 10:44:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:24.790 ************************************ 00:34:24.791 START TEST fio_dif_1_multi_subsystems 00:34:24.791 ************************************ 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.791 bdev_null0 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.791 [2024-12-09 10:44:57.195874] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.791 bdev_null1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.791 10:44:57 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.791 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.057 { 00:34:25.057 "params": { 00:34:25.057 "name": "Nvme$subsystem", 00:34:25.057 "trtype": "$TEST_TRANSPORT", 00:34:25.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.057 "adrfam": "ipv4", 00:34:25.057 "trsvcid": "$NVMF_PORT", 00:34:25.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.057 "hdgst": ${hdgst:-false}, 00:34:25.057 "ddgst": ${ddgst:-false} 00:34:25.057 }, 00:34:25.057 "method": "bdev_nvme_attach_controller" 00:34:25.057 } 00:34:25.057 EOF 00:34:25.057 )") 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:25.057 { 00:34:25.057 "params": { 00:34:25.057 "name": "Nvme$subsystem", 00:34:25.057 "trtype": "$TEST_TRANSPORT", 00:34:25.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:25.057 "adrfam": "ipv4", 00:34:25.057 "trsvcid": "$NVMF_PORT", 00:34:25.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:25.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:25.057 "hdgst": ${hdgst:-false}, 00:34:25.057 "ddgst": ${ddgst:-false} 00:34:25.057 }, 00:34:25.057 "method": "bdev_nvme_attach_controller" 00:34:25.057 } 00:34:25.057 EOF 00:34:25.057 )") 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:25.057 "params": { 00:34:25.057 "name": "Nvme0", 00:34:25.057 "trtype": "tcp", 00:34:25.057 "traddr": "10.0.0.2", 00:34:25.057 "adrfam": "ipv4", 00:34:25.057 "trsvcid": "4420", 00:34:25.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:25.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:25.057 "hdgst": false, 00:34:25.057 "ddgst": false 00:34:25.057 }, 00:34:25.057 "method": "bdev_nvme_attach_controller" 00:34:25.057 },{ 00:34:25.057 "params": { 00:34:25.057 "name": "Nvme1", 00:34:25.057 "trtype": "tcp", 00:34:25.057 "traddr": "10.0.0.2", 00:34:25.057 "adrfam": "ipv4", 00:34:25.057 "trsvcid": "4420", 00:34:25.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:25.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:25.057 "hdgst": false, 00:34:25.057 "ddgst": false 00:34:25.057 }, 00:34:25.057 "method": "bdev_nvme_attach_controller" 00:34:25.057 }' 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:25.057 10:44:57 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:25.319 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:25.319 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:25.319 fio-3.35 00:34:25.319 Starting 2 threads 00:34:37.543 00:34:37.543 filename0: (groupid=0, jobs=1): err= 0: pid=2720092: Mon Dec 9 10:45:08 2024 00:34:37.543 read: IOPS=253, BW=1014KiB/s (1039kB/s)(9.91MiB/10001msec) 00:34:37.543 slat (nsec): min=7063, max=83605, avg=9040.48, stdev=3592.64 00:34:37.543 clat (usec): min=510, max=42676, avg=15745.29, stdev=19695.75 00:34:37.543 lat (usec): min=517, max=42689, avg=15754.33, stdev=19695.65 00:34:37.543 clat percentiles (usec): 00:34:37.543 | 1.00th=[ 553], 5.00th=[ 578], 10.00th=[ 586], 20.00th=[ 603], 00:34:37.543 | 30.00th=[ 619], 40.00th=[ 635], 50.00th=[ 660], 60.00th=[ 734], 00:34:37.543 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:34:37.543 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:37.543 | 99.99th=[42730] 00:34:37.543 bw ( KiB/s): min= 672, max= 1920, per=72.89%, avg=1025.68, stdev=391.84, samples=19 00:34:37.543 iops : min= 168, max= 480, avg=256.42, stdev=97.96, samples=19 00:34:37.543 lat (usec) : 750=60.53%, 1000=2.41% 00:34:37.543 lat (msec) : 50=37.07% 00:34:37.543 cpu : usr=94.92%, sys=4.72%, ctx=17, majf=0, minf=194 00:34:37.543 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:37.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.543 issued rwts: total=2536,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.543 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:37.543 filename1: (groupid=0, jobs=1): err= 0: pid=2720093: Mon Dec 9 10:45:08 2024 00:34:37.543 read: IOPS=98, BW=394KiB/s (404kB/s)(3952KiB/10024msec) 00:34:37.543 slat (nsec): min=7111, max=44404, avg=9540.79, stdev=3813.22 00:34:37.543 clat (usec): min=658, max=42390, avg=40550.13, stdev=4425.30 00:34:37.543 lat (usec): min=666, max=42433, avg=40559.67, stdev=4425.39 00:34:37.543 clat percentiles (usec): 00:34:37.543 | 1.00th=[ 758], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:37.543 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:37.543 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:34:37.543 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:37.543 | 99.99th=[42206] 00:34:37.543 bw ( KiB/s): min= 384, max= 416, per=27.95%, avg=393.60, stdev=15.05, samples=20 00:34:37.543 iops : min= 96, max= 104, avg=98.40, stdev= 3.76, samples=20 00:34:37.543 lat (usec) : 750=0.71%, 1000=0.51% 00:34:37.543 lat (msec) : 50=98.79% 00:34:37.543 cpu : usr=95.31%, sys=4.35%, ctx=14, majf=0, minf=166 00:34:37.543 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.543 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.543 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:37.543 00:34:37.543 Run status group 0 (all jobs): 00:34:37.543 READ: bw=1406KiB/s (1440kB/s), 394KiB/s-1014KiB/s (404kB/s-1039kB/s), io=13.8MiB (14.4MB), run=10001-10024msec 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 10:45:08 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 00:34:37.543 real 0m11.457s 00:34:37.543 user 0m20.695s 00:34:37.543 sys 0m1.204s 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 ************************************ 00:34:37.543 END TEST fio_dif_1_multi_subsystems 00:34:37.543 ************************************ 00:34:37.543 10:45:08 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:37.543 10:45:08 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:37.543 10:45:08 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 ************************************ 00:34:37.543 START TEST fio_dif_rand_params 00:34:37.543 ************************************ 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:37.543 10:45:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 bdev_null0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.543 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:37.544 [2024-12-09 10:45:08.694549] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:37.544 { 00:34:37.544 "params": { 00:34:37.544 "name": "Nvme$subsystem", 00:34:37.544 "trtype": "$TEST_TRANSPORT", 00:34:37.544 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:37.544 "adrfam": "ipv4", 00:34:37.544 "trsvcid": "$NVMF_PORT", 00:34:37.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:37.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:37.544 "hdgst": ${hdgst:-false}, 00:34:37.544 "ddgst": ${ddgst:-false} 00:34:37.544 }, 
00:34:37.544 "method": "bdev_nvme_attach_controller" 00:34:37.544 } 00:34:37.544 EOF 00:34:37.544 )") 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:37.544 
10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:37.544 "params": { 00:34:37.544 "name": "Nvme0", 00:34:37.544 "trtype": "tcp", 00:34:37.544 "traddr": "10.0.0.2", 00:34:37.544 "adrfam": "ipv4", 00:34:37.544 "trsvcid": "4420", 00:34:37.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:37.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:37.544 "hdgst": false, 00:34:37.544 "ddgst": false 00:34:37.544 }, 00:34:37.544 "method": "bdev_nvme_attach_controller" 00:34:37.544 }' 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:37.544 10:45:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:37.544 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:37.544 ... 00:34:37.544 fio-3.35 00:34:37.544 Starting 3 threads 00:34:42.833 00:34:42.833 filename0: (groupid=0, jobs=1): err= 0: pid=2722108: Mon Dec 9 10:45:14 2024 00:34:42.833 read: IOPS=206, BW=25.8MiB/s (27.1MB/s)(130MiB/5047msec) 00:34:42.833 slat (nsec): min=4570, max=28901, avg=13862.48, stdev=1877.69 00:34:42.833 clat (usec): min=5252, max=90980, avg=14456.23, stdev=9065.17 00:34:42.833 lat (usec): min=5265, max=90994, avg=14470.09, stdev=9065.06 00:34:42.833 clat percentiles (usec): 00:34:42.833 | 1.00th=[ 7767], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[11076], 00:34:42.833 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12518], 60.00th=[13042], 00:34:42.833 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15664], 95.00th=[19006], 00:34:42.833 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55313], 99.95th=[90702], 00:34:42.833 | 99.99th=[90702] 00:34:42.833 bw ( KiB/s): min=19968, max=32000, per=30.80%, avg=26624.00, stdev=3534.90, samples=10 00:34:42.833 iops : min= 156, max= 250, avg=208.00, stdev=27.62, samples=10 00:34:42.833 lat (msec) : 10=9.59%, 20=85.43%, 50=1.15%, 100=3.84% 00:34:42.833 cpu : usr=94.61%, sys=4.91%, ctx=6, majf=0, minf=80 00:34:42.833 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.833 issued rwts: total=1043,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.833 filename0: (groupid=0, jobs=1): err= 0: pid=2722109: Mon Dec 9 10:45:14 2024 00:34:42.833 read: IOPS=234, BW=29.4MiB/s (30.8MB/s)(147MiB/5006msec) 00:34:42.833 slat (nsec): min=4317, max=29318, avg=13841.46, 
stdev=1610.32 00:34:42.833 clat (usec): min=5528, max=53229, avg=12750.12, stdev=5435.07 00:34:42.833 lat (usec): min=5541, max=53243, avg=12763.96, stdev=5434.97 00:34:42.833 clat percentiles (usec): 00:34:42.833 | 1.00th=[ 6259], 5.00th=[ 7898], 10.00th=[ 8586], 20.00th=[ 9503], 00:34:42.833 | 30.00th=[11076], 40.00th=[11863], 50.00th=[12518], 60.00th=[13042], 00:34:42.833 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15401], 95.00th=[16188], 00:34:42.833 | 99.00th=[51643], 99.50th=[53216], 99.90th=[53216], 99.95th=[53216], 00:34:42.833 | 99.99th=[53216] 00:34:42.833 bw ( KiB/s): min=25088, max=34560, per=34.74%, avg=30028.80, stdev=2807.06, samples=10 00:34:42.833 iops : min= 196, max= 270, avg=234.60, stdev=21.93, samples=10 00:34:42.833 lat (msec) : 10=22.53%, 20=75.94%, 50=0.26%, 100=1.28% 00:34:42.833 cpu : usr=93.55%, sys=5.93%, ctx=11, majf=0, minf=85 00:34:42.833 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.833 issued rwts: total=1176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.833 filename0: (groupid=0, jobs=1): err= 0: pid=2722110: Mon Dec 9 10:45:14 2024 00:34:42.833 read: IOPS=237, BW=29.7MiB/s (31.1MB/s)(149MiB/5006msec) 00:34:42.833 slat (nsec): min=4471, max=65028, avg=15489.19, stdev=4736.78 00:34:42.833 clat (usec): min=4556, max=57643, avg=12609.93, stdev=5792.58 00:34:42.833 lat (usec): min=4569, max=57658, avg=12625.42, stdev=5792.45 00:34:42.833 clat percentiles (usec): 00:34:42.833 | 1.00th=[ 5473], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[10028], 00:34:42.833 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12649], 00:34:42.833 | 70.00th=[13173], 80.00th=[13829], 90.00th=[14615], 95.00th=[15533], 00:34:42.833 | 99.00th=[52167], 99.50th=[53740], 
99.90th=[57410], 99.95th=[57410], 00:34:42.833 | 99.99th=[57410] 00:34:42.833 bw ( KiB/s): min=26112, max=34816, per=35.13%, avg=30361.60, stdev=3048.68, samples=10 00:34:42.833 iops : min= 204, max= 272, avg=237.20, stdev=23.82, samples=10 00:34:42.833 lat (msec) : 10=20.10%, 20=77.88%, 50=0.67%, 100=1.35% 00:34:42.833 cpu : usr=87.69%, sys=8.61%, ctx=467, majf=0, minf=149 00:34:42.833 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:42.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.833 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:42.833 issued rwts: total=1189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:42.833 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:42.833 00:34:42.833 Run status group 0 (all jobs): 00:34:42.833 READ: bw=84.4MiB/s (88.5MB/s), 25.8MiB/s-29.7MiB/s (27.1MB/s-31.1MB/s), io=426MiB (447MB), run=5006-5047msec 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.833 10:45:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:42.833 10:45:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:42.833 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 bdev_null0 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 [2024-12-09 10:45:15.040709] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 bdev_null1 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:42.834 bdev_null2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # local subsystem config 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:42.834 { 00:34:42.834 "params": { 00:34:42.834 "name": "Nvme$subsystem", 00:34:42.834 "trtype": "$TEST_TRANSPORT", 00:34:42.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.834 "adrfam": "ipv4", 00:34:42.834 "trsvcid": "$NVMF_PORT", 00:34:42.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.834 "hdgst": ${hdgst:-false}, 00:34:42.834 "ddgst": ${ddgst:-false} 00:34:42.834 }, 00:34:42.834 "method": "bdev_nvme_attach_controller" 00:34:42.834 } 00:34:42.834 EOF 00:34:42.834 )") 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.834 10:45:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:42.834 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:42.834 { 00:34:42.834 "params": { 00:34:42.834 "name": "Nvme$subsystem", 00:34:42.834 "trtype": "$TEST_TRANSPORT", 00:34:42.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.834 "adrfam": "ipv4", 00:34:42.834 "trsvcid": "$NVMF_PORT", 00:34:42.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.834 "hdgst": ${hdgst:-false}, 00:34:42.834 "ddgst": ${ddgst:-false} 00:34:42.835 }, 00:34:42.835 "method": "bdev_nvme_attach_controller" 00:34:42.835 } 00:34:42.835 EOF 00:34:42.835 )") 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:42.835 10:45:15 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:42.835 { 00:34:42.835 "params": { 00:34:42.835 "name": "Nvme$subsystem", 00:34:42.835 "trtype": "$TEST_TRANSPORT", 00:34:42.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:42.835 "adrfam": "ipv4", 00:34:42.835 "trsvcid": "$NVMF_PORT", 00:34:42.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:42.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:42.835 "hdgst": ${hdgst:-false}, 00:34:42.835 "ddgst": ${ddgst:-false} 00:34:42.835 }, 00:34:42.835 "method": "bdev_nvme_attach_controller" 00:34:42.835 } 00:34:42.835 EOF 00:34:42.835 )") 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:42.835 "params": { 00:34:42.835 "name": "Nvme0", 00:34:42.835 "trtype": "tcp", 00:34:42.835 "traddr": "10.0.0.2", 00:34:42.835 "adrfam": "ipv4", 00:34:42.835 "trsvcid": "4420", 00:34:42.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:42.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:42.835 "hdgst": false, 00:34:42.835 "ddgst": false 00:34:42.835 }, 00:34:42.835 "method": "bdev_nvme_attach_controller" 00:34:42.835 },{ 00:34:42.835 "params": { 00:34:42.835 "name": "Nvme1", 00:34:42.835 "trtype": "tcp", 00:34:42.835 "traddr": "10.0.0.2", 00:34:42.835 "adrfam": "ipv4", 00:34:42.835 "trsvcid": "4420", 00:34:42.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:42.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:42.835 "hdgst": false, 00:34:42.835 "ddgst": false 00:34:42.835 }, 00:34:42.835 "method": "bdev_nvme_attach_controller" 00:34:42.835 },{ 00:34:42.835 "params": { 00:34:42.835 "name": "Nvme2", 00:34:42.835 "trtype": "tcp", 00:34:42.835 "traddr": "10.0.0.2", 00:34:42.835 "adrfam": "ipv4", 00:34:42.835 "trsvcid": "4420", 00:34:42.835 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:42.835 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:42.835 "hdgst": false, 00:34:42.835 "ddgst": false 00:34:42.835 }, 00:34:42.835 "method": "bdev_nvme_attach_controller" 00:34:42.835 }' 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:42.835 10:45:15 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:42.835 10:45:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:43.097 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:43.097 ... 00:34:43.097 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:43.097 ... 00:34:43.097 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:43.097 ... 
00:34:43.097 fio-3.35 00:34:43.097 Starting 24 threads 00:34:55.345 00:34:55.345 filename0: (groupid=0, jobs=1): err= 0: pid=2722975: Mon Dec 9 10:45:26 2024 00:34:55.345 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.345 slat (usec): min=10, max=110, avg=52.49, stdev=18.17 00:34:55.345 clat (usec): min=18834, max=36569, avg=33956.61, stdev=1281.71 00:34:55.345 lat (usec): min=18859, max=36612, avg=34009.10, stdev=1283.08 00:34:55.345 clat percentiles (usec): 00:34:55.345 | 1.00th=[27657], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.345 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.345 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.345 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:34:55.345 | 99.99th=[36439] 00:34:55.345 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:34:55.345 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.345 lat (msec) : 20=0.34%, 50=99.66% 00:34:55.345 cpu : usr=98.42%, sys=1.19%, ctx=8, majf=0, minf=9 00:34:55.345 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.345 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.345 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.345 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.345 filename0: (groupid=0, jobs=1): err= 0: pid=2722976: Mon Dec 9 10:45:26 2024 00:34:55.345 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10014msec) 00:34:55.345 slat (usec): min=4, max=106, avg=31.81, stdev=24.41 00:34:55.345 clat (usec): min=15102, max=81814, avg=34241.17, stdev=2190.72 00:34:55.345 lat (usec): min=15129, max=81827, avg=34272.98, stdev=2187.51 00:34:55.345 clat percentiles (usec): 00:34:55.345 | 1.00th=[32900], 5.00th=[33162], 
10.00th=[33424], 20.00th=[33817], 00:34:55.345 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.345 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.345 | 99.00th=[35914], 99.50th=[36439], 99.90th=[61604], 99.95th=[61604], 00:34:55.345 | 99.99th=[82314] 00:34:55.345 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1849.35, stdev=76.67, samples=20 00:34:55.345 iops : min= 416, max= 480, avg=462.30, stdev=19.26, samples=20 00:34:55.345 lat (msec) : 20=0.39%, 50=99.27%, 100=0.34% 00:34:55.345 cpu : usr=98.14%, sys=1.36%, ctx=67, majf=0, minf=9 00:34:55.345 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.345 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.345 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.345 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename0: (groupid=0, jobs=1): err= 0: pid=2722977: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.346 slat (usec): min=11, max=168, avg=51.28, stdev=22.89 00:34:55.346 clat (usec): min=15483, max=36665, avg=33937.97, stdev=1300.72 00:34:55.346 lat (usec): min=15520, max=36713, avg=33989.26, stdev=1302.51 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[27657], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:55.346 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.346 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:34:55.346 | 99.99th=[36439] 00:34:55.346 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:34:55.346 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.346 lat (msec) : 20=0.34%, 
50=99.66% 00:34:55.346 cpu : usr=98.22%, sys=1.26%, ctx=77, majf=0, minf=9 00:34:55.346 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename0: (groupid=0, jobs=1): err= 0: pid=2722978: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10018msec) 00:34:55.346 slat (usec): min=15, max=113, avg=73.75, stdev=10.22 00:34:55.346 clat (usec): min=18995, max=63329, avg=33883.24, stdev=1769.08 00:34:55.346 lat (usec): min=19063, max=63360, avg=33956.99, stdev=1766.71 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:34:55.346 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.346 | 99.00th=[35390], 99.50th=[35914], 99.90th=[58459], 99.95th=[58459], 00:34:55.346 | 99.99th=[63177] 00:34:55.346 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1849.60, stdev=77.42, samples=20 00:34:55.346 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:34:55.346 lat (msec) : 20=0.30%, 50=99.35%, 100=0.34% 00:34:55.346 cpu : usr=98.29%, sys=1.26%, ctx=13, majf=0, minf=9 00:34:55.346 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename0: (groupid=0, jobs=1): 
err= 0: pid=2722979: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=462, BW=1848KiB/s (1892kB/s)(18.1MiB/10008msec) 00:34:55.346 slat (usec): min=9, max=115, avg=33.70, stdev=20.44 00:34:55.346 clat (usec): min=19881, max=76465, avg=34315.02, stdev=2676.10 00:34:55.346 lat (usec): min=19904, max=76498, avg=34348.72, stdev=2675.27 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[33424], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.346 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.346 | 99.00th=[35914], 99.50th=[40109], 99.90th=[76022], 99.95th=[76022], 00:34:55.346 | 99.99th=[76022] 00:34:55.346 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1843.35, stdev=76.21, samples=20 00:34:55.346 iops : min= 416, max= 480, avg=460.80, stdev=19.14, samples=20 00:34:55.346 lat (msec) : 20=0.09%, 50=99.57%, 100=0.35% 00:34:55.346 cpu : usr=98.12%, sys=1.47%, ctx=18, majf=0, minf=9 00:34:55.346 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename0: (groupid=0, jobs=1): err= 0: pid=2722980: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.346 slat (usec): min=6, max=107, avg=30.84, stdev=15.91 00:34:55.346 clat (usec): min=17008, max=42347, avg=34157.21, stdev=1336.04 00:34:55.346 lat (usec): min=17018, max=42405, avg=34188.05, stdev=1335.51 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[27395], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.346 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 
60.00th=[34341], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.346 | 99.00th=[35914], 99.50th=[36439], 99.90th=[40109], 99.95th=[40633], 00:34:55.346 | 99.99th=[42206] 00:34:55.346 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:34:55.346 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.346 lat (msec) : 20=0.30%, 50=99.70% 00:34:55.346 cpu : usr=97.51%, sys=1.58%, ctx=162, majf=0, minf=9 00:34:55.346 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename0: (groupid=0, jobs=1): err= 0: pid=2722981: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10011msec) 00:34:55.346 slat (usec): min=16, max=123, avg=50.98, stdev=15.26 00:34:55.346 clat (usec): min=19907, max=51509, avg=34039.87, stdev=1370.09 00:34:55.346 lat (usec): min=19935, max=51543, avg=34090.85, stdev=1371.00 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.346 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.346 | 99.00th=[35914], 99.50th=[36439], 99.90th=[51119], 99.95th=[51643], 00:34:55.346 | 99.99th=[51643] 00:34:55.346 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1849.75, stdev=77.04, samples=20 00:34:55.346 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:34:55.346 lat (msec) : 20=0.06%, 50=99.59%, 100=0.34% 00:34:55.346 cpu : usr=98.09%, sys=1.28%, ctx=82, majf=0, minf=10 00:34:55.346 IO depths : 
1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename0: (groupid=0, jobs=1): err= 0: pid=2722982: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=462, BW=1849KiB/s (1894kB/s)(18.1MiB/10002msec) 00:34:55.346 slat (usec): min=5, max=150, avg=55.25, stdev=19.04 00:34:55.346 clat (usec): min=26622, max=62513, avg=34108.76, stdev=1785.22 00:34:55.346 lat (usec): min=26674, max=62530, avg=34164.01, stdev=1782.97 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.346 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.346 | 99.00th=[35914], 99.50th=[40109], 99.90th=[62653], 99.95th=[62653], 00:34:55.346 | 99.99th=[62653] 00:34:55.346 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1846.05, stdev=77.30, samples=19 00:34:55.346 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:34:55.346 lat (msec) : 50=99.65%, 100=0.35% 00:34:55.346 cpu : usr=98.31%, sys=1.28%, ctx=19, majf=0, minf=9 00:34:55.346 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:55.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.346 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.346 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.346 filename1: (groupid=0, jobs=1): err= 0: pid=2722983: Mon Dec 9 10:45:26 2024 00:34:55.346 read: IOPS=465, BW=1861KiB/s 
(1906kB/s)(18.2MiB/10008msec) 00:34:55.346 slat (usec): min=7, max=115, avg=21.57, stdev=16.97 00:34:55.346 clat (usec): min=18819, max=36565, avg=34210.79, stdev=1284.51 00:34:55.346 lat (usec): min=18861, max=36583, avg=34232.36, stdev=1282.51 00:34:55.346 clat percentiles (usec): 00:34:55.346 | 1.00th=[27657], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.346 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.346 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.347 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:34:55.347 | 99.99th=[36439] 00:34:55.347 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1856.00, stdev=65.66, samples=20 00:34:55.347 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.347 lat (msec) : 20=0.34%, 50=99.66% 00:34:55.347 cpu : usr=98.24%, sys=1.34%, ctx=17, majf=0, minf=9 00:34:55.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722984: Mon Dec 9 10:45:26 2024 00:34:55.347 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.347 slat (usec): min=9, max=106, avg=50.57, stdev=19.79 00:34:55.347 clat (usec): min=18757, max=36560, avg=34000.44, stdev=1285.18 00:34:55.347 lat (usec): min=18799, max=36585, avg=34051.01, stdev=1285.01 00:34:55.347 clat percentiles (usec): 00:34:55.347 | 1.00th=[27657], 5.00th=[33424], 10.00th=[33817], 20.00th=[33817], 00:34:55.347 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:55.347 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 
00:34:55.347 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:34:55.347 | 99.99th=[36439] 00:34:55.347 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:34:55.347 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.347 lat (msec) : 20=0.34%, 50=99.66% 00:34:55.347 cpu : usr=98.29%, sys=1.32%, ctx=19, majf=0, minf=9 00:34:55.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722985: Mon Dec 9 10:45:26 2024 00:34:55.347 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10003msec) 00:34:55.347 slat (nsec): min=9713, max=54466, avg=25257.56, stdev=6712.80 00:34:55.347 clat (usec): min=20295, max=83022, avg=34380.39, stdev=2718.66 00:34:55.347 lat (usec): min=20319, max=83056, avg=34405.64, stdev=2718.10 00:34:55.347 clat percentiles (usec): 00:34:55.347 | 1.00th=[33817], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.347 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.347 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.347 | 99.00th=[35914], 99.50th=[36439], 99.90th=[77071], 99.95th=[77071], 00:34:55.347 | 99.99th=[83362] 00:34:55.347 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1845.89, stdev=77.69, samples=19 00:34:55.347 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:34:55.347 lat (msec) : 50=99.65%, 100=0.35% 00:34:55.347 cpu : usr=98.37%, sys=1.16%, ctx=19, majf=0, minf=9 00:34:55.347 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722986: Mon Dec 9 10:45:26 2024 00:34:55.347 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.347 slat (usec): min=10, max=102, avg=42.27, stdev=18.72 00:34:55.347 clat (usec): min=15371, max=36589, avg=34075.69, stdev=1287.23 00:34:55.347 lat (usec): min=15421, max=36614, avg=34117.96, stdev=1286.50 00:34:55.347 clat percentiles (usec): 00:34:55.347 | 1.00th=[27657], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.347 | 30.00th=[33817], 40.00th=[33817], 50.00th=[34341], 60.00th=[34341], 00:34:55.347 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.347 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:34:55.347 | 99.99th=[36439] 00:34:55.347 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:34:55.347 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.347 lat (msec) : 20=0.34%, 50=99.66% 00:34:55.347 cpu : usr=97.92%, sys=1.36%, ctx=94, majf=0, minf=9 00:34:55.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722987: Mon Dec 9 10:45:26 2024 00:34:55.347 read: IOPS=463, BW=1854KiB/s (1899kB/s)(18.1MiB/10009msec) 00:34:55.347 slat (nsec): min=5366, max=41926, avg=15463.72, stdev=4655.63 00:34:55.347 clat 
(usec): min=33398, max=37629, avg=34368.66, stdev=429.11 00:34:55.347 lat (usec): min=33414, max=37652, avg=34384.12, stdev=428.74 00:34:55.347 clat percentiles (usec): 00:34:55.347 | 1.00th=[33817], 5.00th=[33817], 10.00th=[33817], 20.00th=[34341], 00:34:55.347 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.347 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.347 | 99.00th=[35914], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487], 00:34:55.347 | 99.99th=[37487] 00:34:55.347 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1849.60, stdev=65.33, samples=20 00:34:55.347 iops : min= 448, max= 480, avg=462.40, stdev=16.33, samples=20 00:34:55.347 lat (msec) : 50=100.00% 00:34:55.347 cpu : usr=95.83%, sys=2.50%, ctx=318, majf=0, minf=9 00:34:55.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722988: Mon Dec 9 10:45:26 2024 00:34:55.347 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10012msec) 00:34:55.347 slat (nsec): min=4230, max=99052, avg=50670.95, stdev=17039.31 00:34:55.347 clat (usec): min=19853, max=52326, avg=34045.61, stdev=1422.74 00:34:55.347 lat (usec): min=19888, max=52338, avg=34096.28, stdev=1421.95 00:34:55.347 clat percentiles (usec): 00:34:55.347 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.347 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.347 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.347 | 99.00th=[35914], 99.50th=[36439], 99.90th=[52167], 99.95th=[52167], 00:34:55.347 | 99.99th=[52167] 
00:34:55.347 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1849.60, stdev=77.42, samples=20 00:34:55.347 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:34:55.347 lat (msec) : 20=0.09%, 50=99.57%, 100=0.34% 00:34:55.347 cpu : usr=98.38%, sys=1.22%, ctx=13, majf=0, minf=9 00:34:55.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722989: Mon Dec 9 10:45:26 2024 00:34:55.347 read: IOPS=463, BW=1853KiB/s (1898kB/s)(18.1MiB/10015msec) 00:34:55.347 slat (usec): min=15, max=137, avg=53.41, stdev=19.00 00:34:55.347 clat (usec): min=19849, max=54888, avg=34022.17, stdev=1550.07 00:34:55.347 lat (usec): min=19885, max=54934, avg=34075.58, stdev=1550.52 00:34:55.347 clat percentiles (usec): 00:34:55.347 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.347 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:55.347 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.347 | 99.00th=[35914], 99.50th=[36439], 99.90th=[54789], 99.95th=[54789], 00:34:55.347 | 99.99th=[54789] 00:34:55.347 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1849.20, stdev=77.05, samples=20 00:34:55.347 iops : min= 416, max= 480, avg=462.30, stdev=19.26, samples=20 00:34:55.347 lat (msec) : 20=0.09%, 50=99.57%, 100=0.34% 00:34:55.347 cpu : usr=97.00%, sys=1.82%, ctx=266, majf=0, minf=9 00:34:55.347 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.347 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.347 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.347 filename1: (groupid=0, jobs=1): err= 0: pid=2722990: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=462, BW=1849KiB/s (1894kB/s)(18.1MiB/10002msec) 00:34:55.348 slat (nsec): min=8032, max=85050, avg=20199.76, stdev=11990.88 00:34:55.348 clat (usec): min=20345, max=82430, avg=34444.53, stdev=2716.27 00:34:55.348 lat (usec): min=20368, max=82449, avg=34464.73, stdev=2715.67 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 1.00th=[33817], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.348 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.348 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.348 | 99.00th=[35914], 99.50th=[39584], 99.90th=[77071], 99.95th=[77071], 00:34:55.348 | 99.99th=[82314] 00:34:55.348 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1846.05, stdev=77.30, samples=19 00:34:55.348 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:34:55.348 lat (msec) : 50=99.65%, 100=0.35% 00:34:55.348 cpu : usr=98.41%, sys=1.21%, ctx=12, majf=0, minf=9 00:34:55.348 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.348 filename2: (groupid=0, jobs=1): err= 0: pid=2722991: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10014msec) 00:34:55.348 slat (usec): min=4, max=107, avg=53.90, stdev=14.29 00:34:55.348 clat (usec): min=19893, max=71766, avg=34059.02, stdev=1677.72 00:34:55.348 lat (usec): min=19932, 
max=71778, avg=34112.92, stdev=1676.70 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.348 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.348 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.348 | 99.00th=[35914], 99.50th=[36439], 99.90th=[54264], 99.95th=[54264], 00:34:55.348 | 99.99th=[71828] 00:34:55.348 bw ( KiB/s): min= 1651, max= 1920, per=4.16%, avg=1848.55, stdev=78.73, samples=20 00:34:55.348 iops : min= 412, max= 480, avg=462.10, stdev=19.78, samples=20 00:34:55.348 lat (msec) : 20=0.06%, 50=99.59%, 100=0.34% 00:34:55.348 cpu : usr=98.22%, sys=1.38%, ctx=23, majf=0, minf=9 00:34:55.348 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 issued rwts: total=4638,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.348 filename2: (groupid=0, jobs=1): err= 0: pid=2722992: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=464, BW=1857KiB/s (1902kB/s)(18.2MiB/10029msec) 00:34:55.348 slat (usec): min=5, max=151, avg=80.67, stdev=18.85 00:34:55.348 clat (usec): min=15605, max=46534, avg=33715.12, stdev=1246.16 00:34:55.348 lat (usec): min=15657, max=46574, avg=33795.79, stdev=1248.66 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:34:55.348 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:55.348 | 70.00th=[33817], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.348 | 99.00th=[35390], 99.50th=[35914], 99.90th=[42730], 99.95th=[42730], 00:34:55.348 | 99.99th=[46400] 00:34:55.348 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, 
samples=20 00:34:55.348 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.348 lat (msec) : 20=0.34%, 50=99.66% 00:34:55.348 cpu : usr=98.50%, sys=1.05%, ctx=15, majf=0, minf=9 00:34:55.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.348 filename2: (groupid=0, jobs=1): err= 0: pid=2722993: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=463, BW=1852KiB/s (1897kB/s)(18.1MiB/10020msec) 00:34:55.348 slat (usec): min=4, max=138, avg=52.22, stdev=16.47 00:34:55.348 clat (usec): min=19956, max=60042, avg=34075.80, stdev=1785.08 00:34:55.348 lat (usec): min=19992, max=60054, avg=34128.03, stdev=1783.70 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 1.00th=[33162], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.348 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.348 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.348 | 99.00th=[35914], 99.50th=[36439], 99.90th=[60031], 99.95th=[60031], 00:34:55.348 | 99.99th=[60031] 00:34:55.348 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1848.25, stdev=76.35, samples=20 00:34:55.348 iops : min= 416, max= 480, avg=462.05, stdev=19.08, samples=20 00:34:55.348 lat (msec) : 20=0.04%, 50=99.61%, 100=0.34% 00:34:55.348 cpu : usr=98.31%, sys=1.30%, ctx=19, majf=0, minf=9 00:34:55.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 issued rwts: total=4640,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:55.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.348 filename2: (groupid=0, jobs=1): err= 0: pid=2722994: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=463, BW=1853KiB/s (1897kB/s)(18.1MiB/10009msec) 00:34:55.348 slat (usec): min=8, max=109, avg=51.50, stdev=17.45 00:34:55.348 clat (usec): min=19916, max=89584, avg=34056.45, stdev=3023.06 00:34:55.348 lat (usec): min=19953, max=89607, avg=34107.95, stdev=3023.32 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 1.00th=[26870], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.348 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:34:55.348 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.348 | 99.00th=[36439], 99.50th=[44303], 99.90th=[77071], 99.95th=[77071], 00:34:55.348 | 99.99th=[89654] 00:34:55.348 bw ( KiB/s): min= 1651, max= 1920, per=4.16%, avg=1848.15, stdev=76.32, samples=20 00:34:55.348 iops : min= 412, max= 480, avg=462.00, stdev=19.18, samples=20 00:34:55.348 lat (msec) : 20=0.06%, 50=99.59%, 100=0.35% 00:34:55.348 cpu : usr=96.77%, sys=2.04%, ctx=189, majf=0, minf=9 00:34:55.348 IO depths : 1=5.6%, 2=11.7%, 4=24.6%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 issued rwts: total=4636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.348 filename2: (groupid=0, jobs=1): err= 0: pid=2722995: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.348 slat (usec): min=10, max=156, avg=53.99, stdev=24.39 00:34:55.348 clat (usec): min=15480, max=42802, avg=33923.08, stdev=1320.43 00:34:55.348 lat (usec): min=15509, max=42849, avg=33977.07, stdev=1322.34 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 
1.00th=[27657], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:55.348 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.348 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.348 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:34:55.348 | 99.99th=[42730] 00:34:55.348 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=65.66, samples=20 00:34:55.348 iops : min= 448, max= 480, avg=464.00, stdev=16.42, samples=20 00:34:55.348 lat (msec) : 20=0.34%, 50=99.66% 00:34:55.348 cpu : usr=98.23%, sys=1.36%, ctx=12, majf=0, minf=9 00:34:55.348 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.348 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.348 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.348 filename2: (groupid=0, jobs=1): err= 0: pid=2722996: Mon Dec 9 10:45:26 2024 00:34:55.348 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10008msec) 00:34:55.348 slat (usec): min=11, max=117, avg=53.21, stdev=17.15 00:34:55.348 clat (usec): min=17025, max=36897, avg=33932.93, stdev=1298.11 00:34:55.348 lat (usec): min=17075, max=36948, avg=33986.14, stdev=1300.24 00:34:55.348 clat percentiles (usec): 00:34:55.348 | 1.00th=[27657], 5.00th=[33424], 10.00th=[33424], 20.00th=[33817], 00:34:55.349 | 30.00th=[33817], 40.00th=[33817], 50.00th=[33817], 60.00th=[34341], 00:34:55.349 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34341], 95.00th=[34866], 00:34:55.349 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:34:55.349 | 99.99th=[36963] 00:34:55.349 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1856.00, stdev=64.21, samples=20 00:34:55.349 iops : min= 448, max= 480, avg=464.00, stdev=16.05, samples=20 00:34:55.349 lat 
(msec) : 20=0.34%, 50=99.66% 00:34:55.349 cpu : usr=97.98%, sys=1.44%, ctx=49, majf=0, minf=9 00:34:55.349 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.349 issued rwts: total=4656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.349 filename2: (groupid=0, jobs=1): err= 0: pid=2722997: Mon Dec 9 10:45:26 2024 00:34:55.349 read: IOPS=462, BW=1849KiB/s (1894kB/s)(18.1MiB/10002msec) 00:34:55.349 slat (usec): min=8, max=107, avg=39.74, stdev=22.07 00:34:55.349 clat (usec): min=20301, max=81885, avg=34243.33, stdev=2678.68 00:34:55.349 lat (usec): min=20313, max=81924, avg=34283.07, stdev=2676.75 00:34:55.349 clat percentiles (usec): 00:34:55.349 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:34:55.349 | 30.00th=[33817], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.349 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.349 | 99.00th=[35914], 99.50th=[36439], 99.90th=[76022], 99.95th=[76022], 00:34:55.349 | 99.99th=[82314] 00:34:55.349 bw ( KiB/s): min= 1667, max= 1920, per=4.15%, avg=1846.05, stdev=77.30, samples=19 00:34:55.349 iops : min= 416, max= 480, avg=461.47, stdev=19.42, samples=19 00:34:55.349 lat (msec) : 50=99.65%, 100=0.35% 00:34:55.349 cpu : usr=98.51%, sys=1.09%, ctx=15, majf=0, minf=9 00:34:55.349 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.349 issued rwts: total=4624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.349 filename2: (groupid=0, 
jobs=1): err= 0: pid=2722998: Mon Dec 9 10:45:26 2024 00:34:55.349 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10012msec) 00:34:55.349 slat (usec): min=8, max=100, avg=22.69, stdev=16.82 00:34:55.349 clat (usec): min=14955, max=79942, avg=34311.36, stdev=2076.38 00:34:55.349 lat (usec): min=14992, max=79975, avg=34334.05, stdev=2076.59 00:34:55.349 clat percentiles (usec): 00:34:55.349 | 1.00th=[33162], 5.00th=[33817], 10.00th=[33817], 20.00th=[33817], 00:34:55.349 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34341], 00:34:55.349 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[34866], 00:34:55.349 | 99.00th=[35914], 99.50th=[36439], 99.90th=[60031], 99.95th=[60031], 00:34:55.349 | 99.99th=[80217] 00:34:55.349 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1849.75, stdev=77.04, samples=20 00:34:55.349 iops : min= 416, max= 480, avg=462.40, stdev=19.35, samples=20 00:34:55.349 lat (msec) : 20=0.39%, 50=99.27%, 100=0.34% 00:34:55.349 cpu : usr=97.08%, sys=1.87%, ctx=149, majf=0, minf=9 00:34:55.349 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:55.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.349 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:55.349 issued rwts: total=4640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:55.349 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:55.349 00:34:55.349 Run status group 0 (all jobs): 00:34:55.349 READ: bw=43.4MiB/s (45.5MB/s), 1848KiB/s-1861KiB/s (1892kB/s-1906kB/s), io=435MiB (456MB), run=10002-10029msec 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 
00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for 
sub in "$@" 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 bdev_null0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.349 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.349 [2024-12-09 10:45:26.989833] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.350 10:45:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.350 bdev_null1 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.350 { 00:34:55.350 "params": { 00:34:55.350 "name": "Nvme$subsystem", 00:34:55.350 "trtype": "$TEST_TRANSPORT", 00:34:55.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.350 "adrfam": "ipv4", 00:34:55.350 "trsvcid": "$NVMF_PORT", 00:34:55.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.350 "hdgst": ${hdgst:-false}, 00:34:55.350 "ddgst": ${ddgst:-false} 00:34:55.350 }, 00:34:55.350 "method": "bdev_nvme_attach_controller" 00:34:55.350 } 00:34:55.350 EOF 00:34:55.350 )") 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.350 { 00:34:55.350 "params": { 00:34:55.350 "name": "Nvme$subsystem", 00:34:55.350 "trtype": "$TEST_TRANSPORT", 00:34:55.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.350 "adrfam": "ipv4", 00:34:55.350 "trsvcid": "$NVMF_PORT", 00:34:55.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.350 "hdgst": ${hdgst:-false}, 00:34:55.350 "ddgst": ${ddgst:-false} 00:34:55.350 }, 00:34:55.350 "method": "bdev_nvme_attach_controller" 00:34:55.350 } 00:34:55.350 EOF 00:34:55.350 )") 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.350 "params": { 00:34:55.350 "name": "Nvme0", 00:34:55.350 "trtype": "tcp", 00:34:55.350 "traddr": "10.0.0.2", 00:34:55.350 "adrfam": "ipv4", 00:34:55.350 "trsvcid": "4420", 00:34:55.350 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:55.350 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:55.350 "hdgst": false, 00:34:55.350 "ddgst": false 00:34:55.350 }, 00:34:55.350 "method": "bdev_nvme_attach_controller" 00:34:55.350 },{ 00:34:55.350 "params": { 00:34:55.350 "name": "Nvme1", 00:34:55.350 "trtype": "tcp", 00:34:55.350 "traddr": "10.0.0.2", 00:34:55.350 "adrfam": "ipv4", 00:34:55.350 "trsvcid": "4420", 00:34:55.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:55.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:55.350 "hdgst": false, 00:34:55.350 "ddgst": false 00:34:55.350 }, 00:34:55.350 "method": "bdev_nvme_attach_controller" 00:34:55.350 }' 
00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:55.350 10:45:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.350 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:55.350 ... 00:34:55.350 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:55.350 ... 
00:34:55.350 fio-3.35 00:34:55.350 Starting 4 threads 00:35:01.969 00:35:01.969 filename0: (groupid=0, jobs=1): err= 0: pid=2724377: Mon Dec 9 10:45:33 2024 00:35:01.969 read: IOPS=1913, BW=14.9MiB/s (15.7MB/s)(74.7MiB/5001msec) 00:35:01.969 slat (nsec): min=3962, max=38833, avg=14530.02, stdev=4078.20 00:35:01.969 clat (usec): min=889, max=7609, avg=4126.98, stdev=475.76 00:35:01.969 lat (usec): min=903, max=7625, avg=4141.51, stdev=475.78 00:35:01.969 clat percentiles (usec): 00:35:01.969 | 1.00th=[ 2933], 5.00th=[ 3458], 10.00th=[ 3720], 20.00th=[ 4015], 00:35:01.969 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4146], 00:35:01.969 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4817], 00:35:01.969 | 99.00th=[ 6259], 99.50th=[ 6587], 99.90th=[ 7111], 99.95th=[ 7308], 00:35:01.969 | 99.99th=[ 7635] 00:35:01.969 bw ( KiB/s): min=14512, max=16448, per=25.27%, avg=15276.22, stdev=545.41, samples=9 00:35:01.969 iops : min= 1814, max= 2056, avg=1909.44, stdev=68.18, samples=9 00:35:01.969 lat (usec) : 1000=0.02% 00:35:01.969 lat (msec) : 2=0.32%, 4=18.77%, 10=80.88% 00:35:01.969 cpu : usr=95.16%, sys=4.28%, ctx=7, majf=0, minf=9 00:35:01.969 IO depths : 1=0.1%, 2=19.7%, 4=53.5%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.969 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.969 issued rwts: total=9567,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.969 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:01.969 filename0: (groupid=0, jobs=1): err= 0: pid=2724378: Mon Dec 9 10:45:33 2024 00:35:01.969 read: IOPS=1877, BW=14.7MiB/s (15.4MB/s)(74.0MiB/5042msec) 00:35:01.969 slat (nsec): min=4195, max=36656, avg=14782.20, stdev=4039.60 00:35:01.969 clat (usec): min=1402, max=43679, avg=4187.68, stdev=929.60 00:35:01.969 lat (usec): min=1416, max=43694, avg=4202.46, stdev=929.40 00:35:01.969 clat percentiles (usec): 
00:35:01.969 | 1.00th=[ 2966], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 4015], 00:35:01.969 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:35:01.969 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 5014], 00:35:01.969 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7439], 99.95th=[ 7701], 00:35:01.969 | 99.99th=[43779] 00:35:01.969 bw ( KiB/s): min=14176, max=15984, per=25.05%, avg=15140.80, stdev=511.00, samples=10 00:35:01.969 iops : min= 1772, max= 1998, avg=1892.60, stdev=63.88, samples=10 00:35:01.969 lat (msec) : 2=0.24%, 4=16.63%, 10=83.09%, 50=0.04% 00:35:01.969 cpu : usr=94.49%, sys=4.96%, ctx=12, majf=0, minf=9 00:35:01.969 IO depths : 1=0.1%, 2=18.9%, 4=54.1%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.969 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.969 issued rwts: total=9467,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.969 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:01.969 filename1: (groupid=0, jobs=1): err= 0: pid=2724379: Mon Dec 9 10:45:33 2024 00:35:01.969 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5003msec) 00:35:01.969 slat (nsec): min=4470, max=39228, avg=12207.34, stdev=4086.56 00:35:01.969 clat (usec): min=457, max=7779, avg=4092.76, stdev=417.67 00:35:01.969 lat (usec): min=471, max=7793, avg=4104.96, stdev=417.89 00:35:01.970 clat percentiles (usec): 00:35:01.970 | 1.00th=[ 3130], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3916], 00:35:01.970 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:35:01.970 | 70.00th=[ 4178], 80.00th=[ 4228], 90.00th=[ 4293], 95.00th=[ 4490], 00:35:01.970 | 99.00th=[ 5735], 99.50th=[ 7046], 99.90th=[ 7373], 99.95th=[ 7570], 00:35:01.970 | 99.99th=[ 7767] 00:35:01.970 bw ( KiB/s): min=14845, max=16464, per=25.60%, avg=15476.50, stdev=484.33, samples=10 00:35:01.970 iops : min= 1855, max= 2058, 
avg=1934.50, stdev=60.63, samples=10 00:35:01.970 lat (usec) : 500=0.01%, 1000=0.02% 00:35:01.970 lat (msec) : 2=0.07%, 4=22.66%, 10=77.23% 00:35:01.970 cpu : usr=92.74%, sys=5.92%, ctx=107, majf=0, minf=9 00:35:01.970 IO depths : 1=0.3%, 2=14.4%, 4=58.8%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.970 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.970 issued rwts: total=9677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.970 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:01.970 filename1: (groupid=0, jobs=1): err= 0: pid=2724380: Mon Dec 9 10:45:33 2024 00:35:01.970 read: IOPS=1875, BW=14.7MiB/s (15.4MB/s)(73.3MiB/5004msec) 00:35:01.970 slat (nsec): min=4101, max=66965, avg=14361.49, stdev=5650.57 00:35:01.970 clat (usec): min=776, max=7711, avg=4208.10, stdev=557.27 00:35:01.970 lat (usec): min=790, max=7726, avg=4222.47, stdev=557.13 00:35:01.970 clat percentiles (usec): 00:35:01.970 | 1.00th=[ 2900], 5.00th=[ 3654], 10.00th=[ 3851], 20.00th=[ 4047], 00:35:01.970 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4146], 60.00th=[ 4178], 00:35:01.970 | 70.00th=[ 4228], 80.00th=[ 4228], 90.00th=[ 4621], 95.00th=[ 5211], 00:35:01.970 | 99.00th=[ 6652], 99.50th=[ 6980], 99.90th=[ 7373], 99.95th=[ 7570], 00:35:01.970 | 99.99th=[ 7701] 00:35:01.970 bw ( KiB/s): min=14256, max=15360, per=24.83%, avg=15009.60, stdev=383.44, samples=10 00:35:01.970 iops : min= 1782, max= 1920, avg=1876.20, stdev=47.93, samples=10 00:35:01.970 lat (usec) : 1000=0.14% 00:35:01.970 lat (msec) : 2=0.45%, 4=12.77%, 10=86.64% 00:35:01.970 cpu : usr=87.83%, sys=8.14%, ctx=497, majf=0, minf=0 00:35:01.970 IO depths : 1=0.1%, 2=17.8%, 4=55.0%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:01.970 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.970 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:01.970 
issued rwts: total=9386,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:01.970 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:01.970 00:35:01.970 Run status group 0 (all jobs): 00:35:01.970 READ: bw=59.0MiB/s (61.9MB/s), 14.7MiB/s-15.1MiB/s (15.4MB/s-15.8MB/s), io=298MiB (312MB), run=5001-5042msec 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 00:35:01.970 real 0m24.887s 00:35:01.970 user 4m32.716s 00:35:01.970 sys 0m6.546s 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 ************************************ 00:35:01.970 END TEST fio_dif_rand_params 00:35:01.970 ************************************ 00:35:01.970 10:45:33 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:01.970 10:45:33 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:01.970 10:45:33 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 ************************************ 00:35:01.970 START TEST fio_dif_digest 00:35:01.970 ************************************ 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:01.970 
10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 bdev_null0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:01.970 [2024-12-09 10:45:33.623368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.970 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:01.970 { 00:35:01.971 "params": { 00:35:01.971 "name": "Nvme$subsystem", 00:35:01.971 "trtype": "$TEST_TRANSPORT", 00:35:01.971 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:01.971 "adrfam": 
"ipv4", 00:35:01.971 "trsvcid": "$NVMF_PORT", 00:35:01.971 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:01.971 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:01.971 "hdgst": ${hdgst:-false}, 00:35:01.971 "ddgst": ${ddgst:-false} 00:35:01.971 }, 00:35:01.971 "method": "bdev_nvme_attach_controller" 00:35:01.971 } 00:35:01.971 EOF 00:35:01.971 )") 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.971 10:45:33 
nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:01.971 "params": { 00:35:01.971 "name": "Nvme0", 00:35:01.971 "trtype": "tcp", 00:35:01.971 "traddr": "10.0.0.2", 00:35:01.971 "adrfam": "ipv4", 00:35:01.971 "trsvcid": "4420", 00:35:01.971 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:01.971 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:01.971 "hdgst": true, 00:35:01.971 "ddgst": true 00:35:01.971 }, 00:35:01.971 "method": "bdev_nvme_attach_controller" 00:35:01.971 }' 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:01.971 10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:01.971 
10:45:33 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:01.971 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:01.971 ... 00:35:01.971 fio-3.35 00:35:01.971 Starting 3 threads 00:35:14.195 00:35:14.195 filename0: (groupid=0, jobs=1): err= 0: pid=2725252: Mon Dec 9 10:45:44 2024 00:35:14.195 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(264MiB/10045msec) 00:35:14.195 slat (nsec): min=4516, max=66980, avg=15224.83, stdev=1970.47 00:35:14.195 clat (usec): min=8727, max=52061, avg=14230.68, stdev=1538.21 00:35:14.195 lat (usec): min=8741, max=52078, avg=14245.91, stdev=1538.25 00:35:14.195 clat percentiles (usec): 00:35:14.195 | 1.00th=[11600], 5.00th=[12649], 10.00th=[12911], 20.00th=[13304], 00:35:14.195 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14484], 00:35:14.195 | 70.00th=[14746], 80.00th=[15008], 90.00th=[15401], 95.00th=[15795], 00:35:14.195 | 99.00th=[16712], 99.50th=[17171], 99.90th=[20055], 99.95th=[49546], 00:35:14.195 | 99.99th=[52167] 00:35:14.195 bw ( KiB/s): min=26368, max=27648, per=34.71%, avg=27008.00, stdev=357.24, samples=20 00:35:14.195 iops : min= 206, max= 216, avg=211.00, stdev= 2.79, samples=20 00:35:14.195 lat (msec) : 10=0.28%, 20=99.48%, 50=0.19%, 100=0.05% 00:35:14.195 cpu : usr=94.25%, sys=5.21%, ctx=40, majf=0, minf=161 00:35:14.195 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.195 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.195 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:14.195 filename0: (groupid=0, jobs=1): err= 0: pid=2725253: Mon Dec 9 10:45:44 2024 00:35:14.195 read: IOPS=199, BW=25.0MiB/s 
(26.2MB/s)(251MiB/10045msec) 00:35:14.195 slat (nsec): min=4347, max=36326, avg=14853.62, stdev=1336.38 00:35:14.195 clat (usec): min=9321, max=55307, avg=14976.26, stdev=1589.72 00:35:14.195 lat (usec): min=9336, max=55321, avg=14991.11, stdev=1589.71 00:35:14.195 clat percentiles (usec): 00:35:14.195 | 1.00th=[12387], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:35:14.195 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:35:14.195 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16188], 95.00th=[16712], 00:35:14.195 | 99.00th=[17695], 99.50th=[17957], 99.90th=[25035], 99.95th=[48497], 00:35:14.195 | 99.99th=[55313] 00:35:14.195 bw ( KiB/s): min=24576, max=26624, per=32.98%, avg=25666.60, stdev=417.36, samples=20 00:35:14.195 iops : min= 192, max= 208, avg=200.50, stdev= 3.24, samples=20 00:35:14.195 lat (msec) : 10=0.30%, 20=99.45%, 50=0.20%, 100=0.05% 00:35:14.195 cpu : usr=94.50%, sys=5.01%, ctx=17, majf=0, minf=137 00:35:14.195 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.195 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.195 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.195 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.195 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:14.195 filename0: (groupid=0, jobs=1): err= 0: pid=2725254: Mon Dec 9 10:45:44 2024 00:35:14.195 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(249MiB/10045msec) 00:35:14.195 slat (nsec): min=4196, max=37598, avg=14994.70, stdev=1678.17 00:35:14.195 clat (usec): min=11545, max=55060, avg=15119.02, stdev=2089.16 00:35:14.195 lat (usec): min=11562, max=55076, avg=15134.02, stdev=2089.07 00:35:14.195 clat percentiles (usec): 00:35:14.195 | 1.00th=[12780], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:35:14.195 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15270], 00:35:14.195 | 70.00th=[15533], 
80.00th=[15795], 90.00th=[16188], 95.00th=[16581], 00:35:14.195 | 99.00th=[17695], 99.50th=[19006], 99.90th=[55313], 99.95th=[55313], 00:35:14.196 | 99.99th=[55313] 00:35:14.196 bw ( KiB/s): min=23040, max=26112, per=32.67%, avg=25420.80, stdev=690.43, samples=20 00:35:14.196 iops : min= 180, max= 204, avg=198.60, stdev= 5.39, samples=20 00:35:14.196 lat (msec) : 20=99.60%, 50=0.25%, 100=0.15% 00:35:14.196 cpu : usr=94.57%, sys=4.93%, ctx=14, majf=0, minf=155 00:35:14.196 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:14.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:14.196 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:14.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:14.196 00:35:14.196 Run status group 0 (all jobs): 00:35:14.196 READ: bw=76.0MiB/s (79.7MB/s), 24.7MiB/s-26.3MiB/s (25.9MB/s-27.6MB/s), io=763MiB (800MB), run=10045-10045msec 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null0 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.196 00:35:14.196 real 0m11.336s 00:35:14.196 user 0m29.730s 00:35:14.196 sys 0m1.833s 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.196 10:45:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:14.196 ************************************ 00:35:14.196 END TEST fio_dif_digest 00:35:14.196 ************************************ 00:35:14.196 10:45:44 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:14.196 10:45:44 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:14.196 10:45:44 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:14.196 10:45:44 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:14.196 10:45:44 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:14.196 10:45:44 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:14.196 10:45:44 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:14.196 10:45:44 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:14.196 rmmod nvme_tcp 00:35:14.196 rmmod nvme_fabrics 00:35:14.196 rmmod nvme_keyring 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2718429 ']' 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2718429 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2718429 ']' 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2718429 00:35:14.196 10:45:45 nvmf_dif -- 
common/autotest_common.sh@959 -- # uname 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2718429 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2718429' 00:35:14.196 killing process with pid 2718429 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2718429 00:35:14.196 10:45:45 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2718429 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:14.196 10:45:45 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:14.196 Waiting for block devices as requested 00:35:14.196 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.196 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:14.457 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:14.457 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:14.457 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:14.457 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:14.717 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:14.717 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:14.717 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:14.977 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:14.977 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:14.977 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:15.237 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:15.237 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:15.237 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:15.237 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:15.237 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 
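The block above shows `scripts/setup.sh reset` handing the ioatdma channels and the NVMe drive back from vfio-pci to their kernel drivers, one `BDF (vendor device): old -> new` line per function. A small hypothetical helper (not in the SPDK tree) that splits those rebind lines into BDF, old driver, and new driver fields:

```shell
# Parse a setup.sh rebind line of the form
#   "0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma"
# into "BDF old_driver new_driver". Fields: $1 is the BDF,
# $(NF-2) is the driver left of "->", $NF is the new driver.
parse_rebind() {
    echo "$1" | awk '{print $1, $(NF-2), $NF}'
}

parse_rebind "0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma"
# prints: 0000:00:04.7 vfio-pci ioatdma
```

The same pattern covers the NVMe line in the log (`0000:0b:00.0 ... vfio-pci -> nvme`), since the field positions are identical.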
00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:15.498 10:45:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:15.498 10:45:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:15.498 10:45:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.420 10:45:49 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:17.420 00:35:17.420 real 1m8.130s 00:35:17.420 user 6m31.397s 00:35:17.420 sys 0m17.767s 00:35:17.420 10:45:49 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:17.420 10:45:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:17.420 ************************************ 00:35:17.420 END TEST nvmf_dif 00:35:17.420 ************************************ 00:35:17.420 10:45:49 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:17.420 10:45:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:17.420 10:45:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:17.420 10:45:49 -- common/autotest_common.sh@10 -- # set +x 00:35:17.681 ************************************ 00:35:17.681 START TEST nvmf_abort_qd_sizes 00:35:17.681 ************************************ 00:35:17.681 10:45:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:17.681 * Looking for test storage... 00:35:17.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:17.681 10:45:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:17.681 10:45:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:17.681 10:45:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:17.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.681 --rc genhtml_branch_coverage=1 00:35:17.681 --rc genhtml_function_coverage=1 00:35:17.681 --rc genhtml_legend=1 00:35:17.681 --rc geninfo_all_blocks=1 00:35:17.681 --rc geninfo_unexecuted_blocks=1 00:35:17.681 00:35:17.681 ' 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:17.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.681 --rc genhtml_branch_coverage=1 00:35:17.681 --rc genhtml_function_coverage=1 00:35:17.681 --rc genhtml_legend=1 00:35:17.681 --rc 
geninfo_all_blocks=1 00:35:17.681 --rc geninfo_unexecuted_blocks=1 00:35:17.681 00:35:17.681 ' 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:17.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.681 --rc genhtml_branch_coverage=1 00:35:17.681 --rc genhtml_function_coverage=1 00:35:17.681 --rc genhtml_legend=1 00:35:17.681 --rc geninfo_all_blocks=1 00:35:17.681 --rc geninfo_unexecuted_blocks=1 00:35:17.681 00:35:17.681 ' 00:35:17.681 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:17.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:17.681 --rc genhtml_branch_coverage=1 00:35:17.681 --rc genhtml_function_coverage=1 00:35:17.681 --rc genhtml_legend=1 00:35:17.681 --rc geninfo_all_blocks=1 00:35:17.682 --rc geninfo_unexecuted_blocks=1 00:35:17.682 00:35:17.682 ' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:17.682 10:45:50 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.682 10:45:50 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:17.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:17.682 10:45:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:20.235 10:45:52 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:20.235 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:20.235 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:20.235 Found net devices under 0000:09:00.0: cvl_0_0 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:20.235 Found net devices under 0000:09:00.1: cvl_0_1 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:20.235 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:20.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:20.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:35:20.235 00:35:20.235 --- 10.0.0.2 ping statistics --- 00:35:20.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.236 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:35:20.236 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:20.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:20.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:35:20.236 00:35:20.236 --- 10.0.0.1 ping statistics --- 00:35:20.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:20.236 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:35:20.236 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:20.236 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:20.236 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:20.236 10:45:52 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:21.173 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:21.173 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:21.173 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:22.109 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:22.368 10:45:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2730174 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2730174 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2730174 ']' 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:22.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:22.368 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:22.368 [2024-12-09 10:45:54.688337] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:35:22.368 [2024-12-09 10:45:54.688427] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:22.368 [2024-12-09 10:45:54.764501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:22.626 [2024-12-09 10:45:54.822403] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:22.626 [2024-12-09 10:45:54.822469] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:22.626 [2024-12-09 10:45:54.822481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:22.626 [2024-12-09 10:45:54.822492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:22.627 [2024-12-09 10:45:54.822515] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:22.627 [2024-12-09 10:45:54.823998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:22.627 [2024-12-09 10:45:54.824108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:22.627 [2024-12-09 10:45:54.824183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:22.627 [2024-12-09 10:45:54.824187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 
00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:22.627 10:45:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:22.627 ************************************ 00:35:22.627 START TEST spdk_target_abort 00:35:22.627 ************************************ 00:35:22.627 10:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:22.627 10:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:22.627 10:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:35:22.627 10:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.627 10:45:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:25.909 spdk_targetn1 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:25.909 [2024-12-09 10:45:57.833597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:25.909 [2024-12-09 10:45:57.873000] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:25.909 10:45:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:29.199 Initializing NVMe Controllers 00:35:29.199 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:29.199 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:29.199 Initialization complete. Launching workers. 
00:35:29.199 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11222, failed: 0 00:35:29.199 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 9988 00:35:29.199 success 749, unsuccessful 485, failed 0 00:35:29.199 10:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:29.199 10:46:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.492 Initializing NVMe Controllers 00:35:32.492 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.492 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:32.492 Initialization complete. Launching workers. 00:35:32.492 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8897, failed: 0 00:35:32.492 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1192, failed to submit 7705 00:35:32.492 success 366, unsuccessful 826, failed 0 00:35:32.492 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.492 10:46:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:35.852 Initializing NVMe Controllers 00:35:35.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:35.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:35.852 Initialization complete. Launching workers. 
00:35:35.852 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31081, failed: 0 00:35:35.852 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2534, failed to submit 28547 00:35:35.852 success 525, unsuccessful 2009, failed 0 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.852 10:46:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2730174 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2730174 ']' 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2730174 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:36.790 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2730174 00:35:37.051 10:46:09 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:37.051 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:37.051 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2730174' 00:35:37.051 killing process with pid 2730174 00:35:37.051 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2730174 00:35:37.051 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2730174 00:35:37.311 00:35:37.311 real 0m14.515s 00:35:37.311 user 0m54.560s 00:35:37.311 sys 0m2.818s 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:37.311 ************************************ 00:35:37.311 END TEST spdk_target_abort 00:35:37.311 ************************************ 00:35:37.311 10:46:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:37.311 10:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:37.311 10:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:37.311 10:46:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:37.311 ************************************ 00:35:37.311 START TEST kernel_target_abort 00:35:37.311 ************************************ 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:37.311 10:46:09 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:37.311 10:46:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:38.251 Waiting for block devices as requested 00:35:38.511 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:38.511 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:38.511 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:38.771 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:38.771 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:38.771 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:38.771 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:39.031 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:39.031 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:39.291 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:39.291 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:39.291 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:39.292 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:39.553 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:39.553 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:39.553 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:39.553 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:39.814 10:46:12 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:39.814 No valid GPT data, bailing 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:39.814 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:35:40.075 00:35:40.075 Discovery Log Number of Records 2, Generation counter 2 00:35:40.075 =====Discovery Log Entry 0====== 00:35:40.075 trtype: tcp 00:35:40.075 adrfam: ipv4 00:35:40.075 subtype: current discovery subsystem 00:35:40.075 treq: not specified, sq flow control disable supported 00:35:40.075 portid: 1 00:35:40.075 trsvcid: 4420 00:35:40.075 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:40.075 traddr: 10.0.0.1 00:35:40.075 eflags: none 00:35:40.075 sectype: none 00:35:40.075 =====Discovery Log Entry 1====== 00:35:40.075 trtype: tcp 00:35:40.075 adrfam: ipv4 00:35:40.075 subtype: nvme subsystem 00:35:40.075 treq: not specified, sq flow control disable supported 00:35:40.075 portid: 1 00:35:40.075 trsvcid: 4420 00:35:40.075 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:40.075 traddr: 10.0.0.1 00:35:40.075 eflags: none 00:35:40.075 sectype: none 00:35:40.075 10:46:12 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:40.075 10:46:12 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:43.365 Initializing NVMe Controllers 00:35:43.365 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:43.365 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:43.365 Initialization complete. Launching workers. 
00:35:43.365 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48697, failed: 0 00:35:43.365 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48697, failed to submit 0 00:35:43.365 success 0, unsuccessful 48697, failed 0 00:35:43.365 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:43.365 10:46:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:46.670 Initializing NVMe Controllers 00:35:46.670 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:46.670 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:46.670 Initialization complete. Launching workers. 00:35:46.670 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95218, failed: 0 00:35:46.670 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21226, failed to submit 73992 00:35:46.670 success 0, unsuccessful 21226, failed 0 00:35:46.670 10:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:46.670 10:46:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:49.962 Initializing NVMe Controllers 00:35:49.962 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.962 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:49.962 Initialization complete. Launching workers. 
00:35:49.962 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88243, failed: 0 00:35:49.962 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22054, failed to submit 66189 00:35:49.962 success 0, unsuccessful 22054, failed 0 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:49.962 10:46:21 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:50.904 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:50.904 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:50.904 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:50.904 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:50.904 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:50.904 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:50.904 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:50.904 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:50.904 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:51.848 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:51.848 00:35:51.848 real 0m14.689s 00:35:51.848 user 0m6.190s 00:35:51.848 sys 0m3.639s 00:35:51.848 10:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:51.848 10:46:24 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:51.848 ************************************ 00:35:51.848 END TEST kernel_target_abort 00:35:51.848 ************************************ 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:51.848 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:51.848 rmmod nvme_tcp 00:35:52.110 rmmod nvme_fabrics 00:35:52.110 rmmod nvme_keyring 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2730174 ']' 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2730174 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2730174 ']' 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2730174 00:35:52.110 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2730174) - No such process 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2730174 is not found' 00:35:52.110 Process with pid 2730174 is not found 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:52.110 10:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:53.048 Waiting for block devices as requested 00:35:53.049 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:53.307 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:53.307 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:53.307 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:53.570 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:53.570 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:53.570 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:53.570 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:53.830 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:53.830 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:54.091 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:54.091 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:54.091 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:54.091 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:54.355 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:54.355 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:54.355 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:54.617 10:46:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:56.531 10:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:56.531 00:35:56.531 real 0m39.014s 00:35:56.531 user 1m2.992s 00:35:56.531 sys 0m10.101s 00:35:56.531 10:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.531 10:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:56.531 ************************************ 00:35:56.531 END TEST nvmf_abort_qd_sizes 00:35:56.531 ************************************ 00:35:56.531 10:46:28 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:56.531 10:46:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.531 10:46:28 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:35:56.531 10:46:28 -- common/autotest_common.sh@10 -- # set +x 00:35:56.531 ************************************ 00:35:56.531 START TEST keyring_file 00:35:56.531 ************************************ 00:35:56.531 10:46:28 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:56.531 * Looking for test storage... 00:35:56.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:56.792 10:46:28 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:56.792 10:46:28 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:35:56.792 10:46:28 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:56.792 10:46:29 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.792 10:46:29 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.792 10:46:29 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:56.792 10:46:29 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.792 10:46:29 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:56.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.792 --rc genhtml_branch_coverage=1 00:35:56.792 --rc genhtml_function_coverage=1 00:35:56.792 --rc genhtml_legend=1 00:35:56.792 --rc geninfo_all_blocks=1 00:35:56.792 --rc geninfo_unexecuted_blocks=1 00:35:56.792 00:35:56.792 ' 00:35:56.792 10:46:29 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:56.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.792 --rc genhtml_branch_coverage=1 00:35:56.792 --rc genhtml_function_coverage=1 00:35:56.792 --rc genhtml_legend=1 00:35:56.792 --rc geninfo_all_blocks=1 00:35:56.792 --rc 
geninfo_unexecuted_blocks=1 00:35:56.792 00:35:56.792 ' 00:35:56.792 10:46:29 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:56.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.792 --rc genhtml_branch_coverage=1 00:35:56.792 --rc genhtml_function_coverage=1 00:35:56.792 --rc genhtml_legend=1 00:35:56.792 --rc geninfo_all_blocks=1 00:35:56.792 --rc geninfo_unexecuted_blocks=1 00:35:56.792 00:35:56.792 ' 00:35:56.792 10:46:29 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:56.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.792 --rc genhtml_branch_coverage=1 00:35:56.792 --rc genhtml_function_coverage=1 00:35:56.792 --rc genhtml_legend=1 00:35:56.792 --rc geninfo_all_blocks=1 00:35:56.792 --rc geninfo_unexecuted_blocks=1 00:35:56.792 00:35:56.792 ' 00:35:56.792 10:46:29 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:56.792 10:46:29 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.792 10:46:29 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:56.792 10:46:29 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.793 10:46:29 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.793 10:46:29 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.793 10:46:29 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.793 10:46:29 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.793 10:46:29 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.793 10:46:29 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.793 10:46:29 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.793 10:46:29 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:56.793 10:46:29 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@51 -- # : 0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:35:56.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.S7CA1w34SK 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.S7CA1w34SK 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.S7CA1w34SK 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.S7CA1w34SK 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qvydw46234 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:35:56.793 10:46:29 keyring_file -- nvmf/common.sh@733 -- # python - 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qvydw46234 00:35:56.793 10:46:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qvydw46234 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qvydw46234 
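The `prep_key` calls above each write a TLS PSK to a `mktemp` file and lock it down to mode 0600 before the path is handed to `keyring_file_add_key`. Stripped of the `format_interchange_psk` python step (which the harness runs in `nvmf/common.sh` to wrap the raw hex in the NVMe TLS interchange framing), the file-prep pattern is roughly this; `prep_key_file` is a hypothetical name for illustration, not the harness's own helper:

```shell
# Sketch of the key-file prep the log performs: temp file, owner-only perms,
# path echoed back for the RPC. The interchange-format encoding is omitted.
prep_key_file() {
    local key=$1 path
    path=$(mktemp)              # e.g. /tmp/tmp.S7CA1w34SK in the log above
    printf '%s\n' "$key" > "$path"
    chmod 0600 "$path"          # restrictive perms, as the test enforces
    echo "$path"
}
```

The 0600 chmod matters here: key files readable by other users are rejected by keyring consumers, so the test sets permissions before registering the path.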
00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@30 -- # tgtpid=2736064 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:56.793 10:46:29 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2736064 00:35:56.793 10:46:29 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2736064 ']' 00:35:56.793 10:46:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:56.793 10:46:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.793 10:46:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:56.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:56.793 10:46:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.793 10:46:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:56.793 [2024-12-09 10:46:29.211574] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
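`waitforlisten 2736064` above blocks until the freshly launched `spdk_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A generic version of that poll loop, with a hypothetical name (`wait_for_socket`) and an existence check standing in for the harness's real RPC probe, might look like:

```shell
# Minimal sketch of the waitforlisten pattern: poll until the target process
# is still alive AND its RPC socket path has appeared, with a retry budget.
wait_for_socket() {
    local pid=$1 sock=$2 retries=${3:-100}
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died; give up
        [ -e "$sock" ] && return 0               # socket path is up
        sleep 0.1
    done
    return 1                                      # retry budget exhausted
}
```

The real helper additionally issues a probe RPC over the socket rather than trusting mere existence, so a half-initialized target does not pass the check.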
00:35:56.793 [2024-12-09 10:46:29.211644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736064 ] 00:35:57.053 [2024-12-09 10:46:29.281612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.053 [2024-12-09 10:46:29.337714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:57.313 10:46:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 [2024-12-09 10:46:29.587811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.313 null0 00:35:57.313 [2024-12-09 10:46:29.619861] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:57.313 [2024-12-09 10:46:29.620359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.313 10:46:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 [2024-12-09 10:46:29.643905] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:57.313 request: 00:35:57.313 { 00:35:57.313 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:57.313 "secure_channel": false, 00:35:57.313 "listen_address": { 00:35:57.313 "trtype": "tcp", 00:35:57.313 "traddr": "127.0.0.1", 00:35:57.313 "trsvcid": "4420" 00:35:57.313 }, 00:35:57.313 "method": "nvmf_subsystem_add_listener", 00:35:57.313 "req_id": 1 00:35:57.313 } 00:35:57.313 Got JSON-RPC error response 00:35:57.313 response: 00:35:57.313 { 00:35:57.313 "code": -32602, 00:35:57.313 "message": "Invalid parameters" 00:35:57.313 } 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:57.313 10:46:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=2736074 00:35:57.313 10:46:29 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:57.313 10:46:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2736074 /var/tmp/bperf.sock 00:35:57.313 10:46:29 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2736074 ']' 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.313 10:46:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:57.313 [2024-12-09 10:46:29.693498] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 00:35:57.313 [2024-12-09 10:46:29.693585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2736074 ] 00:35:57.572 [2024-12-09 10:46:29.761617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.572 [2024-12-09 10:46:29.820771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:57.572 10:46:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.572 10:46:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:57.572 10:46:29 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:35:57.572 10:46:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:35:57.831 10:46:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qvydw46234 00:35:57.831 10:46:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qvydw46234 00:35:58.091 10:46:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:58.091 10:46:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:58.091 10:46:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.091 10:46:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.091 10:46:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:58.351 10:46:30 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.S7CA1w34SK == \/\t\m\p\/\t\m\p\.\S\7\C\A\1\w\3\4\S\K ]] 00:35:58.351 10:46:30 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:58.351 10:46:30 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:58.351 10:46:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.351 10:46:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.351 10:46:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:58.611 10:46:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.qvydw46234 == \/\t\m\p\/\t\m\p\.\q\v\y\d\w\4\6\2\3\4 ]] 00:35:58.611 10:46:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:58.611 10:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:58.611 10:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:58.611 10:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.611 10:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.611 10:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
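The "Listener already exists" exchange earlier in this run is a negative test: `NOT rpc_cmd nvmf_subsystem_add_listener …` passes only because the RPC returns the -32602 JSON-RPC error. The inversion wrapper behind that assertion reduces to a pattern like the following minimal version (the harness's actual `NOT` in `autotest_common.sh` adds bookkeeping around it):

```shell
# Minimal sketch of the NOT helper: succeed iff the wrapped command fails,
# so an expected error (like a duplicate-listener RPC) counts as a pass.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}
```

Used as `NOT rpc_cmd …`, this lets `set -e` test scripts assert failure paths without unwinding the whole run when the expected error arrives.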
00:35:59.180 10:46:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:59.180 10:46:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:59.180 10:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:59.180 10:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.180 10:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.180 10:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.180 10:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:59.180 10:46:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:59.180 10:46:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:59.180 10:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:59.440 [2024-12-09 10:46:31.861093] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:59.701 nvme0n1 00:35:59.701 10:46:31 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:59.701 10:46:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:59.702 10:46:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.702 10:46:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.702 10:46:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.702 10:46:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:35:59.961 10:46:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:59.961 10:46:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:59.961 10:46:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:59.961 10:46:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:59.961 10:46:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:59.961 10:46:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:59.961 10:46:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.223 10:46:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:00.223 10:46:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:00.223 Running I/O for 1 seconds... 00:36:01.610 10318.00 IOPS, 40.30 MiB/s 00:36:01.610 Latency(us) 00:36:01.610 [2024-12-09T09:46:34.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:01.610 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:01.610 nvme0n1 : 1.01 10363.73 40.48 0.00 0.00 12308.72 5170.06 18932.62 00:36:01.610 [2024-12-09T09:46:34.051Z] =================================================================================================================== 00:36:01.610 [2024-12-09T09:46:34.051Z] Total : 10363.73 40.48 0.00 0.00 12308.72 5170.06 18932.62 00:36:01.610 { 00:36:01.610 "results": [ 00:36:01.610 { 00:36:01.610 "job": "nvme0n1", 00:36:01.610 "core_mask": "0x2", 00:36:01.610 "workload": "randrw", 00:36:01.610 "percentage": 50, 00:36:01.610 "status": "finished", 00:36:01.610 "queue_depth": 128, 00:36:01.610 "io_size": 4096, 00:36:01.610 "runtime": 1.008035, 00:36:01.610 "iops": 10363.727449939734, 00:36:01.610 "mibps": 40.483310351327084, 
00:36:01.610 "io_failed": 0, 00:36:01.610 "io_timeout": 0, 00:36:01.610 "avg_latency_us": 12308.724175148634, 00:36:01.610 "min_latency_us": 5170.062222222222, 00:36:01.610 "max_latency_us": 18932.62222222222 00:36:01.610 } 00:36:01.610 ], 00:36:01.610 "core_count": 1 00:36:01.610 } 00:36:01.610 10:46:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:01.610 10:46:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:01.610 10:46:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:01.610 10:46:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:01.610 10:46:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:01.610 10:46:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:01.610 10:46:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:01.610 10:46:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:01.870 10:46:34 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:01.870 10:46:34 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:01.870 10:46:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:01.870 10:46:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:01.870 10:46:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:01.870 10:46:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:01.870 10:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.129 10:46:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:02.129 10:46:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:02.129 10:46:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.129 10:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:02.389 [2024-12-09 10:46:34.728023] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:02.389 [2024-12-09 10:46:34.728414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfc170 (107): Transport endpoint is not connected 00:36:02.389 [2024-12-09 10:46:34.729402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfc170 (9): Bad file descriptor 00:36:02.389 [2024-12-09 10:46:34.730401] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:02.389 [2024-12-09 10:46:34.730435] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:02.389 [2024-12-09 10:46:34.730448] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:02.389 [2024-12-09 10:46:34.730462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:36:02.389 request: 00:36:02.389 { 00:36:02.389 "name": "nvme0", 00:36:02.389 "trtype": "tcp", 00:36:02.389 "traddr": "127.0.0.1", 00:36:02.389 "adrfam": "ipv4", 00:36:02.389 "trsvcid": "4420", 00:36:02.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:02.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:02.389 "prchk_reftag": false, 00:36:02.389 "prchk_guard": false, 00:36:02.389 "hdgst": false, 00:36:02.389 "ddgst": false, 00:36:02.389 "psk": "key1", 00:36:02.389 "allow_unrecognized_csi": false, 00:36:02.389 "method": "bdev_nvme_attach_controller", 00:36:02.389 "req_id": 1 00:36:02.389 } 00:36:02.389 Got JSON-RPC error response 00:36:02.389 response: 00:36:02.389 { 00:36:02.389 "code": -5, 00:36:02.389 "message": "Input/output error" 00:36:02.389 } 00:36:02.389 10:46:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:02.389 10:46:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:02.389 10:46:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:02.389 10:46:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:02.389 10:46:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:02.389 10:46:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:02.389 10:46:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.390 10:46:34 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:36:02.390 10:46:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:02.390 10:46:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.651 10:46:35 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:02.651 10:46:35 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:02.651 10:46:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:02.651 10:46:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:02.651 10:46:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:02.651 10:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:02.651 10:46:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:02.910 10:46:35 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:02.910 10:46:35 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:02.910 10:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:03.168 10:46:35 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:03.168 10:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:03.428 10:46:35 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:03.428 10:46:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:03.428 10:46:35 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:03.997 10:46:36 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:36:03.997 10:46:36 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.S7CA1w34SK 00:36:03.997 10:46:36 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:36:03.997 10:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:36:03.997 [2024-12-09 10:46:36.396466] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.S7CA1w34SK': 0100660 00:36:03.997 [2024-12-09 10:46:36.396500] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:03.997 request: 00:36:03.997 { 00:36:03.997 "name": "key0", 00:36:03.997 "path": "/tmp/tmp.S7CA1w34SK", 00:36:03.997 "method": "keyring_file_add_key", 00:36:03.997 "req_id": 1 00:36:03.997 } 00:36:03.997 Got JSON-RPC error response 00:36:03.997 response: 00:36:03.997 { 00:36:03.997 "code": -1, 00:36:03.997 "message": "Operation not permitted" 00:36:03.997 } 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:03.997 10:46:36 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:03.997 10:46:36 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:03.997 10:46:36 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.S7CA1w34SK 00:36:03.997 10:46:36 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:36:03.997 10:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.S7CA1w34SK 00:36:04.259 10:46:36 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.S7CA1w34SK 00:36:04.519 10:46:36 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:04.519 10:46:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:04.519 10:46:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:04.519 10:46:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:04.519 10:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:04.519 10:46:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:04.779 10:46:36 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:04.779 10:46:36 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.779 10:46:36 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:04.779 10:46:36 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.779 10:46:36 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:04.779 10:46:36 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.779 10:46:36 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:04.779 10:46:36 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:04.779 10:46:36 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:04.779 10:46:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.040 [2024-12-09 10:46:37.234714] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.S7CA1w34SK': No such file or directory 00:36:05.040 [2024-12-09 10:46:37.234744] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:05.040 [2024-12-09 10:46:37.234780] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:05.040 [2024-12-09 10:46:37.234792] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:05.040 [2024-12-09 10:46:37.234805] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:05.040 [2024-12-09 10:46:37.234816] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:05.040 request: 00:36:05.040 { 00:36:05.040 "name": "nvme0", 00:36:05.040 "trtype": "tcp", 00:36:05.040 "traddr": "127.0.0.1", 00:36:05.040 "adrfam": "ipv4", 00:36:05.040 "trsvcid": "4420", 00:36:05.040 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.040 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:36:05.040 "prchk_reftag": false, 00:36:05.040 "prchk_guard": false, 00:36:05.040 "hdgst": false, 00:36:05.040 "ddgst": false, 00:36:05.040 "psk": "key0", 00:36:05.040 "allow_unrecognized_csi": false, 00:36:05.040 "method": "bdev_nvme_attach_controller", 00:36:05.040 "req_id": 1 00:36:05.040 } 00:36:05.040 Got JSON-RPC error response 00:36:05.040 response: 00:36:05.040 { 00:36:05.040 "code": -19, 00:36:05.040 "message": "No such device" 00:36:05.040 } 00:36:05.040 10:46:37 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:05.040 10:46:37 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.040 10:46:37 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.040 10:46:37 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.040 10:46:37 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:05.040 10:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:05.299 10:46:37 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4Zgb7vETTH 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:05.299 10:46:37 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:05.299 10:46:37 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:36:05.299 10:46:37 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:05.299 10:46:37 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:05.299 10:46:37 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:05.299 10:46:37 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4Zgb7vETTH 00:36:05.299 10:46:37 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4Zgb7vETTH 00:36:05.299 10:46:37 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.4Zgb7vETTH 00:36:05.300 10:46:37 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4Zgb7vETTH 00:36:05.300 10:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4Zgb7vETTH 00:36:05.559 10:46:37 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.559 10:46:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:05.819 nvme0n1 00:36:05.819 10:46:38 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:05.819 10:46:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:05.819 10:46:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:05.819 10:46:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:05.819 10:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:05.819 
10:46:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.078 10:46:38 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:06.079 10:46:38 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:06.079 10:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:06.339 10:46:38 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:06.339 10:46:38 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:06.339 10:46:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.339 10:46:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.339 10:46:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.598 10:46:39 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:06.598 10:46:39 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:06.598 10:46:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.598 10:46:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.598 10:46:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.598 10:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.598 10:46:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:07.171 10:46:39 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:07.171 10:46:39 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:07.171 10:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:36:07.171 10:46:39 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:07.171 10:46:39 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:07.171 10:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.430 10:46:39 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:07.430 10:46:39 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4Zgb7vETTH 00:36:07.430 10:46:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4Zgb7vETTH 00:36:07.999 10:46:40 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qvydw46234 00:36:07.999 10:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qvydw46234 00:36:07.999 10:46:40 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:07.999 10:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:08.570 nvme0n1 00:36:08.570 10:46:40 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:08.570 10:46:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:08.832 10:46:41 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:08.832 "subsystems": [ 00:36:08.832 { 00:36:08.832 "subsystem": "keyring", 00:36:08.832 
"config": [ 00:36:08.832 { 00:36:08.832 "method": "keyring_file_add_key", 00:36:08.832 "params": { 00:36:08.832 "name": "key0", 00:36:08.832 "path": "/tmp/tmp.4Zgb7vETTH" 00:36:08.832 } 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "method": "keyring_file_add_key", 00:36:08.832 "params": { 00:36:08.832 "name": "key1", 00:36:08.832 "path": "/tmp/tmp.qvydw46234" 00:36:08.832 } 00:36:08.832 } 00:36:08.832 ] 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "subsystem": "iobuf", 00:36:08.832 "config": [ 00:36:08.832 { 00:36:08.832 "method": "iobuf_set_options", 00:36:08.832 "params": { 00:36:08.832 "small_pool_count": 8192, 00:36:08.832 "large_pool_count": 1024, 00:36:08.832 "small_bufsize": 8192, 00:36:08.832 "large_bufsize": 135168, 00:36:08.832 "enable_numa": false 00:36:08.832 } 00:36:08.832 } 00:36:08.832 ] 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "subsystem": "sock", 00:36:08.832 "config": [ 00:36:08.832 { 00:36:08.832 "method": "sock_set_default_impl", 00:36:08.832 "params": { 00:36:08.832 "impl_name": "posix" 00:36:08.832 } 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "method": "sock_impl_set_options", 00:36:08.832 "params": { 00:36:08.832 "impl_name": "ssl", 00:36:08.832 "recv_buf_size": 4096, 00:36:08.832 "send_buf_size": 4096, 00:36:08.832 "enable_recv_pipe": true, 00:36:08.832 "enable_quickack": false, 00:36:08.832 "enable_placement_id": 0, 00:36:08.832 "enable_zerocopy_send_server": true, 00:36:08.832 "enable_zerocopy_send_client": false, 00:36:08.832 "zerocopy_threshold": 0, 00:36:08.832 "tls_version": 0, 00:36:08.832 "enable_ktls": false 00:36:08.832 } 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "method": "sock_impl_set_options", 00:36:08.832 "params": { 00:36:08.832 "impl_name": "posix", 00:36:08.832 "recv_buf_size": 2097152, 00:36:08.832 "send_buf_size": 2097152, 00:36:08.832 "enable_recv_pipe": true, 00:36:08.832 "enable_quickack": false, 00:36:08.832 "enable_placement_id": 0, 00:36:08.832 "enable_zerocopy_send_server": true, 00:36:08.832 
"enable_zerocopy_send_client": false, 00:36:08.832 "zerocopy_threshold": 0, 00:36:08.832 "tls_version": 0, 00:36:08.832 "enable_ktls": false 00:36:08.832 } 00:36:08.832 } 00:36:08.832 ] 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "subsystem": "vmd", 00:36:08.832 "config": [] 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "subsystem": "accel", 00:36:08.832 "config": [ 00:36:08.832 { 00:36:08.832 "method": "accel_set_options", 00:36:08.832 "params": { 00:36:08.832 "small_cache_size": 128, 00:36:08.832 "large_cache_size": 16, 00:36:08.832 "task_count": 2048, 00:36:08.832 "sequence_count": 2048, 00:36:08.832 "buf_count": 2048 00:36:08.832 } 00:36:08.832 } 00:36:08.832 ] 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "subsystem": "bdev", 00:36:08.832 "config": [ 00:36:08.832 { 00:36:08.832 "method": "bdev_set_options", 00:36:08.832 "params": { 00:36:08.832 "bdev_io_pool_size": 65535, 00:36:08.832 "bdev_io_cache_size": 256, 00:36:08.832 "bdev_auto_examine": true, 00:36:08.832 "iobuf_small_cache_size": 128, 00:36:08.832 "iobuf_large_cache_size": 16 00:36:08.832 } 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "method": "bdev_raid_set_options", 00:36:08.832 "params": { 00:36:08.832 "process_window_size_kb": 1024, 00:36:08.832 "process_max_bandwidth_mb_sec": 0 00:36:08.832 } 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "method": "bdev_iscsi_set_options", 00:36:08.832 "params": { 00:36:08.832 "timeout_sec": 30 00:36:08.832 } 00:36:08.832 }, 00:36:08.832 { 00:36:08.832 "method": "bdev_nvme_set_options", 00:36:08.832 "params": { 00:36:08.832 "action_on_timeout": "none", 00:36:08.832 "timeout_us": 0, 00:36:08.832 "timeout_admin_us": 0, 00:36:08.832 "keep_alive_timeout_ms": 10000, 00:36:08.832 "arbitration_burst": 0, 00:36:08.832 "low_priority_weight": 0, 00:36:08.832 "medium_priority_weight": 0, 00:36:08.832 "high_priority_weight": 0, 00:36:08.832 "nvme_adminq_poll_period_us": 10000, 00:36:08.832 "nvme_ioq_poll_period_us": 0, 00:36:08.833 "io_queue_requests": 512, 00:36:08.833 
"delay_cmd_submit": true, 00:36:08.833 "transport_retry_count": 4, 00:36:08.833 "bdev_retry_count": 3, 00:36:08.833 "transport_ack_timeout": 0, 00:36:08.833 "ctrlr_loss_timeout_sec": 0, 00:36:08.833 "reconnect_delay_sec": 0, 00:36:08.833 "fast_io_fail_timeout_sec": 0, 00:36:08.833 "disable_auto_failback": false, 00:36:08.833 "generate_uuids": false, 00:36:08.833 "transport_tos": 0, 00:36:08.833 "nvme_error_stat": false, 00:36:08.833 "rdma_srq_size": 0, 00:36:08.833 "io_path_stat": false, 00:36:08.833 "allow_accel_sequence": false, 00:36:08.833 "rdma_max_cq_size": 0, 00:36:08.833 "rdma_cm_event_timeout_ms": 0, 00:36:08.833 "dhchap_digests": [ 00:36:08.833 "sha256", 00:36:08.833 "sha384", 00:36:08.833 "sha512" 00:36:08.833 ], 00:36:08.833 "dhchap_dhgroups": [ 00:36:08.833 "null", 00:36:08.833 "ffdhe2048", 00:36:08.833 "ffdhe3072", 00:36:08.833 "ffdhe4096", 00:36:08.833 "ffdhe6144", 00:36:08.833 "ffdhe8192" 00:36:08.833 ] 00:36:08.833 } 00:36:08.833 }, 00:36:08.833 { 00:36:08.833 "method": "bdev_nvme_attach_controller", 00:36:08.833 "params": { 00:36:08.833 "name": "nvme0", 00:36:08.833 "trtype": "TCP", 00:36:08.833 "adrfam": "IPv4", 00:36:08.833 "traddr": "127.0.0.1", 00:36:08.833 "trsvcid": "4420", 00:36:08.833 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:08.833 "prchk_reftag": false, 00:36:08.833 "prchk_guard": false, 00:36:08.833 "ctrlr_loss_timeout_sec": 0, 00:36:08.833 "reconnect_delay_sec": 0, 00:36:08.833 "fast_io_fail_timeout_sec": 0, 00:36:08.833 "psk": "key0", 00:36:08.833 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:08.833 "hdgst": false, 00:36:08.833 "ddgst": false, 00:36:08.833 "multipath": "multipath" 00:36:08.833 } 00:36:08.833 }, 00:36:08.833 { 00:36:08.833 "method": "bdev_nvme_set_hotplug", 00:36:08.833 "params": { 00:36:08.833 "period_us": 100000, 00:36:08.833 "enable": false 00:36:08.833 } 00:36:08.833 }, 00:36:08.833 { 00:36:08.833 "method": "bdev_wait_for_examine" 00:36:08.833 } 00:36:08.833 ] 00:36:08.833 }, 00:36:08.833 { 00:36:08.833 
"subsystem": "nbd", 00:36:08.833 "config": [] 00:36:08.833 } 00:36:08.833 ] 00:36:08.833 }' 00:36:08.833 10:46:41 keyring_file -- keyring/file.sh@115 -- # killprocess 2736074 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2736074 ']' 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2736074 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2736074 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2736074' 00:36:08.833 killing process with pid 2736074 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@973 -- # kill 2736074 00:36:08.833 Received shutdown signal, test time was about 1.000000 seconds 00:36:08.833 00:36:08.833 Latency(us) 00:36:08.833 [2024-12-09T09:46:41.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.833 [2024-12-09T09:46:41.274Z] =================================================================================================================== 00:36:08.833 [2024-12-09T09:46:41.274Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:08.833 10:46:41 keyring_file -- common/autotest_common.sh@978 -- # wait 2736074 00:36:09.093 10:46:41 keyring_file -- keyring/file.sh@118 -- # bperfpid=2737567 00:36:09.093 10:46:41 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2737567 /var/tmp/bperf.sock 00:36:09.093 10:46:41 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2737567 ']' 00:36:09.093 10:46:41 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:36:09.093 10:46:41 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:09.093 10:46:41 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.093 10:46:41 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:09.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:09.093 10:46:41 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:09.093 "subsystems": [ 00:36:09.093 { 00:36:09.093 "subsystem": "keyring", 00:36:09.093 "config": [ 00:36:09.093 { 00:36:09.093 "method": "keyring_file_add_key", 00:36:09.093 "params": { 00:36:09.093 "name": "key0", 00:36:09.093 "path": "/tmp/tmp.4Zgb7vETTH" 00:36:09.093 } 00:36:09.093 }, 00:36:09.093 { 00:36:09.093 "method": "keyring_file_add_key", 00:36:09.094 "params": { 00:36:09.094 "name": "key1", 00:36:09.094 "path": "/tmp/tmp.qvydw46234" 00:36:09.094 } 00:36:09.094 } 00:36:09.094 ] 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "subsystem": "iobuf", 00:36:09.094 "config": [ 00:36:09.094 { 00:36:09.094 "method": "iobuf_set_options", 00:36:09.094 "params": { 00:36:09.094 "small_pool_count": 8192, 00:36:09.094 "large_pool_count": 1024, 00:36:09.094 "small_bufsize": 8192, 00:36:09.094 "large_bufsize": 135168, 00:36:09.094 "enable_numa": false 00:36:09.094 } 00:36:09.094 } 00:36:09.094 ] 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "subsystem": "sock", 00:36:09.094 "config": [ 00:36:09.094 { 00:36:09.094 "method": "sock_set_default_impl", 00:36:09.094 "params": { 00:36:09.094 "impl_name": "posix" 00:36:09.094 } 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "method": "sock_impl_set_options", 00:36:09.094 "params": { 00:36:09.094 "impl_name": "ssl", 00:36:09.094 "recv_buf_size": 4096, 00:36:09.094 
"send_buf_size": 4096, 00:36:09.094 "enable_recv_pipe": true, 00:36:09.094 "enable_quickack": false, 00:36:09.094 "enable_placement_id": 0, 00:36:09.094 "enable_zerocopy_send_server": true, 00:36:09.094 "enable_zerocopy_send_client": false, 00:36:09.094 "zerocopy_threshold": 0, 00:36:09.094 "tls_version": 0, 00:36:09.094 "enable_ktls": false 00:36:09.094 } 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "method": "sock_impl_set_options", 00:36:09.094 "params": { 00:36:09.094 "impl_name": "posix", 00:36:09.094 "recv_buf_size": 2097152, 00:36:09.094 "send_buf_size": 2097152, 00:36:09.094 "enable_recv_pipe": true, 00:36:09.094 "enable_quickack": false, 00:36:09.094 "enable_placement_id": 0, 00:36:09.094 "enable_zerocopy_send_server": true, 00:36:09.094 "enable_zerocopy_send_client": false, 00:36:09.094 "zerocopy_threshold": 0, 00:36:09.094 "tls_version": 0, 00:36:09.094 "enable_ktls": false 00:36:09.094 } 00:36:09.094 } 00:36:09.094 ] 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "subsystem": "vmd", 00:36:09.094 "config": [] 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "subsystem": "accel", 00:36:09.094 "config": [ 00:36:09.094 { 00:36:09.094 "method": "accel_set_options", 00:36:09.094 "params": { 00:36:09.094 "small_cache_size": 128, 00:36:09.094 "large_cache_size": 16, 00:36:09.094 "task_count": 2048, 00:36:09.094 "sequence_count": 2048, 00:36:09.094 "buf_count": 2048 00:36:09.094 } 00:36:09.094 } 00:36:09.094 ] 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "subsystem": "bdev", 00:36:09.094 "config": [ 00:36:09.094 { 00:36:09.094 "method": "bdev_set_options", 00:36:09.094 "params": { 00:36:09.094 "bdev_io_pool_size": 65535, 00:36:09.094 "bdev_io_cache_size": 256, 00:36:09.094 "bdev_auto_examine": true, 00:36:09.094 "iobuf_small_cache_size": 128, 00:36:09.094 "iobuf_large_cache_size": 16 00:36:09.094 } 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "method": "bdev_raid_set_options", 00:36:09.094 "params": { 00:36:09.094 "process_window_size_kb": 1024, 00:36:09.094 
"process_max_bandwidth_mb_sec": 0 00:36:09.094 } 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "method": "bdev_iscsi_set_options", 00:36:09.094 "params": { 00:36:09.094 "timeout_sec": 30 00:36:09.094 } 00:36:09.094 }, 00:36:09.094 { 00:36:09.094 "method": "bdev_nvme_set_options", 00:36:09.094 "params": { 00:36:09.094 "action_on_timeout": "none", 00:36:09.094 "timeout_us": 0, 00:36:09.094 "timeout_admin_us": 0, 00:36:09.094 "keep_alive_timeout_ms": 10000, 00:36:09.094 "arbitration_burst": 0, 00:36:09.094 "low_priority_weight": 0, 00:36:09.094 "medium_priority_weight": 0, 00:36:09.094 "high_priority_weight": 0, 00:36:09.094 "nvme_adminq_poll_period_us": 10000, 00:36:09.094 "nvme_ioq_poll_period_us": 0, 00:36:09.094 "io_queue_requests": 512, 00:36:09.094 "delay_cmd_submit": true, 00:36:09.094 "transport_retry_count": 4, 00:36:09.094 "bdev_retry_count": 3, 00:36:09.094 "transport_ack_timeout": 0, 00:36:09.094 "ctrlr_loss_timeout_sec": 0, 00:36:09.094 "reconnect_delay_sec": 0, 00:36:09.095 "fast_io_fail_timeout_sec": 0, 00:36:09.095 "disable_auto_failback": false, 00:36:09.095 "generate_uuids": false, 00:36:09.095 "transport_tos": 0, 00:36:09.095 "nvme_error_stat": false, 00:36:09.095 "rdma_srq_size": 0, 00:36:09.095 "io_path_stat": false, 00:36:09.095 "allow_accel_sequence": false, 00:36:09.095 "rdma_max_cq_size": 0, 00:36:09.095 "rdma_cm_event_timeout_ms": 0, 00:36:09.095 "dhchap_digests": [ 00:36:09.095 "sha256", 00:36:09.095 "sha384", 00:36:09.095 "sha512" 00:36:09.095 ], 00:36:09.095 "dhchap_dhgroups": [ 00:36:09.095 "null", 00:36:09.095 "ffdhe2048", 00:36:09.095 "ffdhe3072", 00:36:09.095 "ffdhe4096", 00:36:09.095 "ffdhe6144", 00:36:09.095 "ffdhe8192" 00:36:09.095 ] 00:36:09.095 } 00:36:09.095 }, 00:36:09.095 { 00:36:09.095 "method": "bdev_nvme_attach_controller", 00:36:09.095 "params": { 00:36:09.095 "name": "nvme0", 00:36:09.095 "trtype": "TCP", 00:36:09.095 "adrfam": "IPv4", 00:36:09.095 "traddr": "127.0.0.1", 00:36:09.095 "trsvcid": "4420", 00:36:09.095 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:36:09.095 "prchk_reftag": false, 00:36:09.095 "prchk_guard": false, 00:36:09.095 "ctrlr_loss_timeout_sec": 0, 00:36:09.095 "reconnect_delay_sec": 0, 00:36:09.095 "fast_io_fail_timeout_sec": 0, 00:36:09.095 "psk": "key0", 00:36:09.095 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.095 "hdgst": false, 00:36:09.095 "ddgst": false, 00:36:09.095 "multipath": "multipath" 00:36:09.095 } 00:36:09.095 }, 00:36:09.095 { 00:36:09.095 "method": "bdev_nvme_set_hotplug", 00:36:09.095 "params": { 00:36:09.095 "period_us": 100000, 00:36:09.095 "enable": false 00:36:09.095 } 00:36:09.095 }, 00:36:09.095 { 00:36:09.095 "method": "bdev_wait_for_examine" 00:36:09.095 } 00:36:09.095 ] 00:36:09.095 }, 00:36:09.095 { 00:36:09.095 "subsystem": "nbd", 00:36:09.095 "config": [] 00:36:09.095 } 00:36:09.095 ] 00:36:09.095 }' 00:36:09.095 10:46:41 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.095 10:46:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:09.095 [2024-12-09 10:46:41.439986] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:36:09.095 [2024-12-09 10:46:41.440082] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2737567 ] 00:36:09.095 [2024-12-09 10:46:41.510309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.356 [2024-12-09 10:46:41.569480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.356 [2024-12-09 10:46:41.763010] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:09.616 10:46:41 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.616 10:46:41 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:09.616 10:46:41 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:09.616 10:46:41 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:09.616 10:46:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.874 10:46:42 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:09.874 10:46:42 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:09.874 10:46:42 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:09.874 10:46:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.874 10:46:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.874 10:46:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:09.874 10:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.133 10:46:42 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:10.133 10:46:42 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:10.133 10:46:42 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:10.133 10:46:42 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.133 10:46:42 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.133 10:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.133 10:46:42 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:10.392 10:46:42 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:10.392 10:46:42 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:10.392 10:46:42 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:10.392 10:46:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:10.654 10:46:42 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:10.654 10:46:42 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:10.654 10:46:42 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4Zgb7vETTH /tmp/tmp.qvydw46234 00:36:10.654 10:46:42 keyring_file -- keyring/file.sh@20 -- # killprocess 2737567 00:36:10.654 10:46:42 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2737567 ']' 00:36:10.654 10:46:42 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2737567 00:36:10.654 10:46:42 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:10.654 10:46:42 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.654 10:46:42 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2737567 00:36:10.654 10:46:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:10.654 10:46:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:10.654 10:46:43 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2737567' 00:36:10.654 killing process with pid 2737567 00:36:10.654 10:46:43 keyring_file -- common/autotest_common.sh@973 -- # kill 2737567 00:36:10.654 Received shutdown signal, test time was about 1.000000 seconds 00:36:10.654 00:36:10.654 Latency(us) 00:36:10.654 [2024-12-09T09:46:43.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:10.654 [2024-12-09T09:46:43.095Z] =================================================================================================================== 00:36:10.654 [2024-12-09T09:46:43.095Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:10.654 10:46:43 keyring_file -- common/autotest_common.sh@978 -- # wait 2737567 00:36:10.914 10:46:43 keyring_file -- keyring/file.sh@21 -- # killprocess 2736064 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2736064 ']' 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2736064 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2736064 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2736064' 00:36:10.914 killing process with pid 2736064 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@973 -- # kill 2736064 00:36:10.914 10:46:43 keyring_file -- common/autotest_common.sh@978 -- # wait 2736064 00:36:11.484 00:36:11.484 real 0m14.840s 00:36:11.484 user 0m37.765s 00:36:11.484 sys 0m3.237s 00:36:11.484 10:46:43 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:11.484 10:46:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:11.484 ************************************ 00:36:11.484 END TEST keyring_file 00:36:11.484 ************************************ 00:36:11.484 10:46:43 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:11.484 10:46:43 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:11.484 10:46:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:11.484 10:46:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:11.484 10:46:43 -- common/autotest_common.sh@10 -- # set +x 00:36:11.484 ************************************ 00:36:11.484 START TEST keyring_linux 00:36:11.484 ************************************ 00:36:11.484 10:46:43 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:11.484 Joined session keyring: 923128449 00:36:11.484 * Looking for test storage... 
00:36:11.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:11.484 10:46:43 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:11.484 10:46:43 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:11.484 10:46:43 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:11.745 10:46:43 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:11.745 10:46:43 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:11.745 10:46:43 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:11.745 10:46:43 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:11.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.745 --rc genhtml_branch_coverage=1 00:36:11.745 --rc genhtml_function_coverage=1 00:36:11.745 --rc genhtml_legend=1 00:36:11.745 --rc geninfo_all_blocks=1 00:36:11.745 --rc geninfo_unexecuted_blocks=1 00:36:11.745 00:36:11.745 ' 00:36:11.745 10:46:43 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:11.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.745 --rc genhtml_branch_coverage=1 00:36:11.746 --rc genhtml_function_coverage=1 00:36:11.746 --rc genhtml_legend=1 00:36:11.746 --rc geninfo_all_blocks=1 00:36:11.746 --rc geninfo_unexecuted_blocks=1 00:36:11.746 00:36:11.746 ' 
00:36:11.746 10:46:43 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:11.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.746 --rc genhtml_branch_coverage=1 00:36:11.746 --rc genhtml_function_coverage=1 00:36:11.746 --rc genhtml_legend=1 00:36:11.746 --rc geninfo_all_blocks=1 00:36:11.746 --rc geninfo_unexecuted_blocks=1 00:36:11.746 00:36:11.746 ' 00:36:11.746 10:46:43 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:11.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:11.746 --rc genhtml_branch_coverage=1 00:36:11.746 --rc genhtml_function_coverage=1 00:36:11.746 --rc genhtml_legend=1 00:36:11.746 --rc geninfo_all_blocks=1 00:36:11.746 --rc geninfo_unexecuted_blocks=1 00:36:11.746 00:36:11.746 ' 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:11.746 10:46:43 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:11.746 10:46:43 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:11.746 10:46:43 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:11.746 10:46:43 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:11.746 10:46:43 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.746 10:46:43 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.746 10:46:43 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.746 10:46:43 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:11.746 10:46:43 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:11.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:11.746 10:46:43 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:11.746 10:46:43 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:11.746 /tmp/:spdk-test:key0 00:36:11.746 10:46:43 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:11.746 10:46:44 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:11.746 10:46:44 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.746 10:46:44 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.746 10:46:44 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:11.746 10:46:44 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:11.746 10:46:44 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:11.746 10:46:44 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:11.746 /tmp/:spdk-test:key1 00:36:11.746 10:46:44 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2738027 00:36:11.746 10:46:44 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:11.746 10:46:44 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2738027 00:36:11.746 10:46:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2738027 ']' 00:36:11.746 10:46:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.746 10:46:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.746 10:46:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.746 10:46:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.746 10:46:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:11.746 [2024-12-09 10:46:44.091937] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:36:11.746 [2024-12-09 10:46:44.092035] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738027 ] 00:36:11.746 [2024-12-09 10:46:44.155268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.005 [2024-12-09 10:46:44.210846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:12.265 10:46:44 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:12.265 [2024-12-09 10:46:44.459602] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.265 null0 00:36:12.265 [2024-12-09 10:46:44.491642] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:12.265 [2024-12-09 10:46:44.492105] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.265 10:46:44 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:12.265 945645270 00:36:12.265 10:46:44 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:12.265 1048311762 00:36:12.265 10:46:44 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2738041 00:36:12.265 10:46:44 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k 
-w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:12.265 10:46:44 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2738041 /var/tmp/bperf.sock 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2738041 ']' 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:12.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.265 10:46:44 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:12.265 [2024-12-09 10:46:44.558162] Starting SPDK v25.01-pre git sha1 6c714c5fe / DPDK 24.03.0 initialization... 
00:36:12.265 [2024-12-09 10:46:44.558245] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2738041 ] 00:36:12.265 [2024-12-09 10:46:44.623522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:12.265 [2024-12-09 10:46:44.681269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.523 10:46:44 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.523 10:46:44 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:12.523 10:46:44 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:12.523 10:46:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:12.782 10:46:45 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:12.782 10:46:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:13.040 10:46:45 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:13.040 10:46:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:13.298 [2024-12-09 10:46:45.665076] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:13.558 nvme0n1 00:36:13.558 10:46:45 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:36:13.558 10:46:45 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:13.558 10:46:45 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:13.558 10:46:45 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:13.558 10:46:45 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:13.558 10:46:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.817 10:46:46 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:13.817 10:46:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:13.817 10:46:46 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:13.817 10:46:46 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:13.817 10:46:46 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.817 10:46:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.817 10:46:46 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:14.077 10:46:46 keyring_linux -- keyring/linux.sh@25 -- # sn=945645270 00:36:14.077 10:46:46 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:14.077 10:46:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:14.077 10:46:46 keyring_linux -- keyring/linux.sh@26 -- # [[ 945645270 == \9\4\5\6\4\5\2\7\0 ]] 00:36:14.077 10:46:46 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 945645270 00:36:14.077 10:46:46 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:14.077 10:46:46 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:14.077 Running I/O for 1 seconds... 00:36:15.017 11457.00 IOPS, 44.75 MiB/s 00:36:15.017 Latency(us) 00:36:15.017 [2024-12-09T09:46:47.458Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:15.017 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:15.017 nvme0n1 : 1.01 11461.43 44.77 0.00 0.00 11099.23 4369.07 16019.91 00:36:15.017 [2024-12-09T09:46:47.458Z] =================================================================================================================== 00:36:15.017 [2024-12-09T09:46:47.458Z] Total : 11461.43 44.77 0.00 0.00 11099.23 4369.07 16019.91 00:36:15.017 { 00:36:15.017 "results": [ 00:36:15.017 { 00:36:15.017 "job": "nvme0n1", 00:36:15.017 "core_mask": "0x2", 00:36:15.017 "workload": "randread", 00:36:15.017 "status": "finished", 00:36:15.017 "queue_depth": 128, 00:36:15.017 "io_size": 4096, 00:36:15.017 "runtime": 1.010869, 00:36:15.017 "iops": 11461.425763377847, 00:36:15.017 "mibps": 44.771194388194715, 00:36:15.017 "io_failed": 0, 00:36:15.017 "io_timeout": 0, 00:36:15.017 "avg_latency_us": 11099.232703582229, 00:36:15.017 "min_latency_us": 4369.066666666667, 00:36:15.017 "max_latency_us": 16019.91111111111 00:36:15.017 } 00:36:15.017 ], 00:36:15.017 "core_count": 1 00:36:15.017 } 00:36:15.017 10:46:47 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:15.017 10:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:15.355 10:46:47 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:15.355 10:46:47 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:15.355 10:46:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:15.355 10:46:47 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:15.355 10:46:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:15.355 10:46:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.614 10:46:48 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:15.614 10:46:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:15.614 10:46:48 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:15.614 10:46:48 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:15.614 10:46:48 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.614 10:46:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:15.874 [2024-12-09 10:46:48.276008] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:15.874 [2024-12-09 10:46:48.276895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5cf20 (107): Transport endpoint is not connected 00:36:15.874 [2024-12-09 10:46:48.277888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5cf20 (9): Bad file descriptor 00:36:15.874 [2024-12-09 10:46:48.278887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:15.874 [2024-12-09 10:46:48.278907] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:15.874 [2024-12-09 10:46:48.278935] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:15.874 [2024-12-09 10:46:48.278949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:15.874 request: 00:36:15.874 { 00:36:15.874 "name": "nvme0", 00:36:15.874 "trtype": "tcp", 00:36:15.874 "traddr": "127.0.0.1", 00:36:15.874 "adrfam": "ipv4", 00:36:15.874 "trsvcid": "4420", 00:36:15.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:15.874 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:15.874 "prchk_reftag": false, 00:36:15.874 "prchk_guard": false, 00:36:15.874 "hdgst": false, 00:36:15.874 "ddgst": false, 00:36:15.874 "psk": ":spdk-test:key1", 00:36:15.874 "allow_unrecognized_csi": false, 00:36:15.874 "method": "bdev_nvme_attach_controller", 00:36:15.874 "req_id": 1 00:36:15.874 } 00:36:15.874 Got JSON-RPC error response 00:36:15.874 response: 00:36:15.874 { 00:36:15.874 "code": -5, 00:36:15.874 "message": "Input/output error" 00:36:15.874 } 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@33 -- # sn=945645270 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 945645270 00:36:15.874 1 links removed 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:15.874 
10:46:48 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@33 -- # sn=1048311762 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1048311762 00:36:15.874 1 links removed 00:36:15.874 10:46:48 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2738041 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2738041 ']' 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2738041 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.874 10:46:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738041 00:36:16.133 10:46:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:16.133 10:46:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:16.133 10:46:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738041' 00:36:16.133 killing process with pid 2738041 00:36:16.133 10:46:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 2738041 00:36:16.133 Received shutdown signal, test time was about 1.000000 seconds 00:36:16.133 00:36:16.133 Latency(us) 00:36:16.133 [2024-12-09T09:46:48.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.133 [2024-12-09T09:46:48.574Z] =================================================================================================================== 00:36:16.133 [2024-12-09T09:46:48.574Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:16.133 10:46:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 
2738041 00:36:16.393 10:46:48 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2738027 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2738027 ']' 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2738027 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2738027 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2738027' 00:36:16.393 killing process with pid 2738027 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@973 -- # kill 2738027 00:36:16.393 10:46:48 keyring_linux -- common/autotest_common.sh@978 -- # wait 2738027 00:36:16.654 00:36:16.654 real 0m5.232s 00:36:16.654 user 0m10.455s 00:36:16.654 sys 0m1.571s 00:36:16.654 10:46:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.654 10:46:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:16.654 ************************************ 00:36:16.654 END TEST keyring_linux 00:36:16.654 ************************************ 00:36:16.654 10:46:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- 
spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:36:16.654 10:46:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:16.654 10:46:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:16.654 10:46:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:36:16.654 10:46:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:36:16.654 10:46:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:36:16.654 10:46:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:36:16.654 10:46:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:16.654 10:46:49 -- common/autotest_common.sh@10 -- # set +x 00:36:16.654 10:46:49 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:36:16.654 10:46:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:36:16.654 10:46:49 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:36:16.654 10:46:49 -- common/autotest_common.sh@10 -- # set +x 00:36:18.561 INFO: APP EXITING 00:36:18.561 INFO: killing all VMs 00:36:18.561 INFO: killing vhost app 00:36:18.561 INFO: EXIT DONE 00:36:19.963 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:19.963 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:19.963 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:19.963 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:19.963 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:19.963 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:19.963 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:19.963 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:19.963 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:36:19.963 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:19.963 0000:80:04.6 (8086 0e26): Already using the 
ioatdma driver 00:36:19.963 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:19.963 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:19.963 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:20.222 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:20.222 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:20.222 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:21.603 Cleaning 00:36:21.603 Removing: /var/run/dpdk/spdk0/config 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:21.603 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:21.603 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:21.603 Removing: /var/run/dpdk/spdk1/config 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:21.603 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:21.603 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:21.603 Removing: /var/run/dpdk/spdk2/config 00:36:21.603 Removing: 
/var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:21.603 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:21.603 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:21.603 Removing: /var/run/dpdk/spdk3/config 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:21.603 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:21.603 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:21.603 Removing: /var/run/dpdk/spdk4/config 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:21.603 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:36:21.603 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:21.603 Removing: /dev/shm/bdev_svc_trace.1 00:36:21.603 Removing: /dev/shm/nvmf_trace.0 00:36:21.603 Removing: /dev/shm/spdk_tgt_trace.pid2415435 00:36:21.603 Removing: /var/run/dpdk/spdk0 00:36:21.603 Removing: /var/run/dpdk/spdk1 00:36:21.603 Removing: /var/run/dpdk/spdk2 00:36:21.603 Removing: /var/run/dpdk/spdk3 00:36:21.603 Removing: /var/run/dpdk/spdk4 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2413750 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2414494 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2415435 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2415767 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2416460 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2416600 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2417313 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2417444 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2417705 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2419001 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2420442 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2420886 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2421091 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2421306 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2421508 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2421669 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2421932 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2422128 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2422343 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2424829 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2424993 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2425270 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2425282 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2425713 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2425718 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2426149 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2426158 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2426325 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2426456 00:36:21.603 Removing: 
/var/run/dpdk/spdk_pid2426620 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2426629 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2427127 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2427285 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2427609 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2429728 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2432361 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2439358 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2439775 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2442297 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2442579 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2445106 00:36:21.603 Removing: /var/run/dpdk/spdk_pid2448955 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2451136 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2458064 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2463423 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2464638 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2465408 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2475681 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2478098 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2505889 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2509176 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2513005 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2517394 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2517406 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2518057 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2518678 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2519261 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2519656 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2519781 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2519923 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2520059 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2520062 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2520724 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2521379 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2522039 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2522444 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2522448 
00:36:21.864 Removing: /var/run/dpdk/spdk_pid2522677 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2523604 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2524446 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2530292 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2558239 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2561165 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2562342 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2563659 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2563801 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2563943 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2564091 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2564532 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2565856 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2566716 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2567147 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2568767 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2569193 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2569638 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2572033 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2575437 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2575438 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2575439 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2577771 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2583142 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2585806 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2589568 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2590513 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2591727 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2592695 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2595469 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2598052 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2600425 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2604662 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2604664 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2607572 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2607715 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2607962 00:36:21.864 Removing: 
/var/run/dpdk/spdk_pid2608236 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2608245 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2611010 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2611459 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2614133 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2616110 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2620164 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2623499 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2629990 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2634469 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2634472 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2646731 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2647252 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2647662 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2648079 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2648664 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2649183 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2649679 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2650112 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2653128 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2653391 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2657190 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2657247 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2660610 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2663222 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2670148 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2670555 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2673056 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2673333 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2675844 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2679673 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2681836 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2688828 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2694029 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2695222 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2695880 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2706085 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2708336 
00:36:21.864 Removing: /var/run/dpdk/spdk_pid2710525 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2715572 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2715586 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2718485 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2719975 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2722019 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2722795 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2724198 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2725073 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2730484 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2730871 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2731263 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2732821 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2733218 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2733618 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2736064 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2736074 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2737567 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2738027 00:36:21.864 Removing: /var/run/dpdk/spdk_pid2738041 00:36:21.864 Clean 00:36:22.124 10:46:54 -- common/autotest_common.sh@1453 -- # return 0 00:36:22.124 10:46:54 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:36:22.124 10:46:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.124 10:46:54 -- common/autotest_common.sh@10 -- # set +x 00:36:22.124 10:46:54 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:36:22.124 10:46:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:22.124 10:46:54 -- common/autotest_common.sh@10 -- # set +x 00:36:22.124 10:46:54 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:22.124 10:46:54 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:22.124 10:46:54 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:22.124 10:46:54 -- 
spdk/autotest.sh@396 -- # [[ y == y ]] 00:36:22.124 10:46:54 -- spdk/autotest.sh@398 -- # hostname 00:36:22.124 10:46:54 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:22.383 geninfo: WARNING: invalid characters removed from testname! 00:36:54.475 10:47:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:36:57.027 10:47:29 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:00.335 10:47:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:02.882 10:47:35 -- 
spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:06.187 10:47:38 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:09.495 10:47:41 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:12.046 10:47:44 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:12.046 10:47:44 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:12.046 10:47:44 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:12.046 10:47:44 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:12.046 10:47:44 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:12.046 10:47:44 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:12.046 + [[ -n 2343184 ]] 00:37:12.046 + sudo kill 2343184 00:37:12.063 [Pipeline] } 00:37:12.077 [Pipeline] // stage 00:37:12.081 [Pipeline] } 00:37:12.095 [Pipeline] // timeout 00:37:12.100 [Pipeline] } 00:37:12.113 [Pipeline] // catchError 00:37:12.118 [Pipeline] } 00:37:12.132 [Pipeline] // wrap 00:37:12.136 [Pipeline] } 00:37:12.146 [Pipeline] // catchError 00:37:12.165 [Pipeline] stage 00:37:12.167 [Pipeline] { (Epilogue) 00:37:12.179 [Pipeline] catchError 00:37:12.181 [Pipeline] { 00:37:12.194 [Pipeline] echo 00:37:12.196 Cleanup processes 00:37:12.202 [Pipeline] sh 00:37:12.810 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:12.810 2748746 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:12.825 [Pipeline] sh 00:37:13.110 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:13.110 ++ grep -v 'sudo pgrep' 00:37:13.110 ++ awk '{print $1}' 00:37:13.110 + sudo kill -9 00:37:13.110 + true 00:37:13.119 [Pipeline] sh 00:37:13.402 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:23.432 [Pipeline] sh 00:37:23.783 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:23.783 Artifacts sizes are good 00:37:23.924 [Pipeline] archiveArtifacts 00:37:23.971 Archiving artifacts 00:37:24.555 [Pipeline] sh 00:37:24.853 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:24.871 [Pipeline] cleanWs 00:37:24.887 [WS-CLEANUP] Deleting project workspace... 00:37:24.887 [WS-CLEANUP] Deferred wipeout is used... 00:37:24.902 [WS-CLEANUP] done 00:37:24.904 [Pipeline] } 00:37:24.926 [Pipeline] // catchError 00:37:24.942 [Pipeline] sh 00:37:25.239 + logger -p user.info -t JENKINS-CI 00:37:25.251 [Pipeline] } 00:37:25.264 [Pipeline] // stage 00:37:25.270 [Pipeline] } 00:37:25.286 [Pipeline] // node 00:37:25.293 [Pipeline] End of Pipeline 00:37:25.333 Finished: SUCCESS